Four or five Christmases ago, we received a gift called Alexa.
She is about 8 inches tall, round, smart and mysterious. Alexa is fun! She can play music, answer questions and act as a verbal encyclopedia. I think she must be related to Siri and Google and any other voices out there in the great beyond.
But whatever name you want to call her, she is part of artificial intelligence, a new idea I find fascinating and a bit crazy.
She has invaded my computer! I don’t remember inviting her, but it’s interesting.
I recently wrote a column that was a letter to Alexander Graham Bell, asking him if he was aware of all that has been done with the telephone he invented so long ago. I didn’t expect an answer.
I learned that Alexander didn’t want a telephone in his office or study, because he found it distracting.
I didn’t know this and I didn’t find it in an encyclopedia. It just came to me over the Internet from The Conversation, written by Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy School, and Nathan Sanders, Affiliate, Berkman Klein Center for Internet & Society, also at Harvard. The article said that because newer generations of AI models are more sophisticated, people will need to learn to approach AI skeptically, deliberately constructing the input they give it and thinking critically about its output.
“As AI chatbots become more powerful, how do we know they’re working in our best interest?” the article asks.
What distinguishes AI systems from other internet services is how interactive they are and how these interactions will increasingly become like relationships.
The authors imagine AI planning trips for us, negotiating on our behalf, and serving as therapists and life coaches.
“They are likely to be with you 24/7, know you intimately, and be able to anticipate your needs. This kind of conversational interface to the vast network of services and resources on the web is within the capabilities of existing generative AIs like ChatGPT. They are on track to become personalized digital assistants.”
“The AIs of the future should be trustworthy, but unless the government delivers protections for AI products, people will be on their own to guess at potential risks, biases and negative effects of their experiences with them,” the authors conclude.
So, to answer my question: Can we trust AI and should we?
Yes, we can, but perhaps we shouldn’t, unless the government delivers the appropriate protection for AI products.
Carole Ledbetter is a former, long-time Write Team member who resides in Ottawa.