It’s popular in some circles to argue that we are unleashing AI as a completely untested technology with possibly dangerous implications for society. My own experience is different. Ever since I studied intelligent systems, connectionism, and neural networks in the 90s, I’ve tracked the development of AI. Much of my academic career has been based on this understanding. I’ve watched for 20 years as my colleagues at NRC achieved world-leading results in machine learning and, later, deep learning.
Meanwhile, AI technologies have been deployed – slowly, carefully – over the last decade. Expert systems were used to power services like WebMD. Recognition systems were deployed to anticipate failures in oil pipelines and airplane engines. Translation services slowly – and sometimes painfully – became more reliable. Some of these I’ve watched, but many of them I’ve used.
Let me list a few:
– As just mentioned, translation engines have become more and more reliable over the years. While I can mostly read western European languages without aid, I am utterly dependent on AI to read languages that are completely foreign to me, like Armenian and Chinese. Through AI, I have discovered the magic of Tang poetry, and (sometimes) send responses to my Chinese friends in their own language.
– Of course, I am not completely fluent even in Western European languages, which is why it was really helpful in Madrid last month to be able to use my Pixel phone camera to view and translate the labels on things like skin cream and toothpaste right in the pharmacy, no text editor needed.
– I’ve been using Google’s audio recorder to generate real-time text transcripts of my presentations over the last year or two. It’s nowhere near perfect – what I really need is something that transcribes what I meant rather than just what I said – but this has saved me hours and hours of time needed to produce decent text versions of my talks.
– Also in Madrid, I attended a conference that was taking place almost entirely in Spanish. They were using audio-to-text translation on the presentation screen, which I was able to follow pretty well, but it got a lot easier to keep up with the pace when they switched to real-time Spanish audio to English text translation. Still not perfect (it was interesting to see how the AI corrected its translations as more words were added) but pretty good.
– I’ve decided, though, that I need to become fluent in Spanish, so I’m using Duolingo for language study. I’m not there yet, but I can feel my Spanish skills improving. As Duolingo says, I’m ‘not a beginner’.
– It was actually pretty cold in Spain, even though it was March, and I depended on weather predictions. These are increasingly accurate these days because of the AI used to develop and apply forecasting models.
– I travelled around Madrid using the Metro; the city has excellent service. Madrid’s metro uses AI to reduce emissions and improve air quality throughout the underground network of tunnels and stations.
– Security is important to me, especially when I travel. Losing my phone would be bad enough, but at least I can protect the contents of my phone with a biometric login system (it can recognize my face, but I prefer fingerprint detection, so it doesn’t accidentally open my phone every time I look at it).
– I’ve also received a number of calls over the years as my sometimes unusual purchases at obscure destinations have triggered AI banking security systems on my accounts and credit cards (by ‘unusual’ we mean ‘uncharacteristic for me’).
– When I travel at home, I often drive my car. My insurance rates drop each time I get a new car because of increasingly reliable AI safety systems. In particular, I find the adaptive cruise control to be a lifesaver. The car also warns me when I’m drifting out of my lane. In theory it can stay on track without relying on me to steer, but for now I steer my own car and depend on it for suggestions.
– When I travel I take a lot of photos. My D-750 is packed with sensors helping it do everything from auto-focusing to automated white balancing.
– I use Topaz AI when I’m processing my photos. My main go-to is the Denoise function, which removes the speckles (or ‘grain’) produced when shooting images at low light, providing a nice smooth image that looks more like what was actually in front of the camera. I also sometimes use the AI sharpen function, but not nearly as often, because it’s not as good at feature recognition as I would like.
– I share my photos – about 39,000 of them so far, each one individually edited (imagine doing that without the tools!) – so they’re available to AI image-generation software to use as examples (nobody cares whether AI uses Getty images, because there’s so much freely licensed stuff out there). I’ve used DALL-E a few times to generate images, including my current Mastodon icon (pictured, above).
– As readers know, I do a lot of reading for my newsletter. I follow almost a thousand RSS feeds. Over the last few years, I’ve started using an AI engine called Leo to organize my news items into different categories, allowing me to track input for several projects at a time, as well as spot the best posts for me. What’s notable is that Leo is my personal AI engine – I’m the one training it. So it doesn’t depend on other people’s priorities or background.
– I’m also subjected to bad centrally-managed content recommendations, like the algorithms that power YouTube or Netflix recommendations. These are bad – and it’s pretty easy to spot the bad recommendation systems as compared to the good ones. Basically, you need a lot more options to choose from (tens of thousands of news items, as compared to hundreds of TV shows) and you need a lot more personalization (tens of thousands of points of feedback, instead of hundreds).
– And yes, I’ve tried automated text generation. The other day I asked it for a list of survey validation methods; it gave me a good (if generic) list, which I used to plan my research for that section. I haven’t used it for any other writing, and don’t plan to. ChatGPT is, as everyone I think understands, still in the experimentation and testing stage.
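The difference between a personal engine like Leo, trained on one reader’s feedback, and a centrally-managed recommender can be illustrated with a toy example. The sketch below is purely illustrative (the class name, scoring method, and word-counting approach are my own inventions, not how Leo or any commercial system actually works): it ranks news items by a smoothed log-odds score over words the single user has previously marked relevant or irrelevant, so every point of feedback shapes that one user’s ranking.

```python
from collections import Counter
import math

class PersonalFeed:
    """Toy personal recommender: ranks items using only the feedback
    of a single user. Illustrative only; real engines use far richer
    models than per-word counts."""

    def __init__(self):
        self.liked = Counter()     # word counts from items marked relevant
        self.disliked = Counter()  # word counts from items dismissed

    def feedback(self, text, relevant):
        """Record one point of user feedback on an item."""
        words = text.lower().split()
        (self.liked if relevant else self.disliked).update(words)

    def score(self, text):
        """Smoothed log-odds that this item resembles liked items.
        Add-one smoothing keeps unseen words neutral."""
        total_l = sum(self.liked.values()) + 1
        total_d = sum(self.disliked.values()) + 1
        s = 0.0
        for w in text.lower().split():
            s += math.log((self.liked[w] + 1) / total_l)
            s -= math.log((self.disliked[w] + 1) / total_d)
        return s

    def rank(self, items):
        """Return items ordered from most to least relevant."""
        return sorted(items, key=self.score, reverse=True)
```

A centrally-managed system, by contrast, would pool the counts across millions of users, so no individual’s tens of thousands of feedback points could dominate the ranking – which is exactly why the personal version feels so much better tuned.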
I have no doubt that I use other AI applications without being aware of it. I don’t have a problem with that. As with any tool or technology, sometimes things can go wrong (why, even my bicycle gets flats!) but the trick is to recognize when it’s doing something unusual. You have to have confidence in your own judgment, whether as a driver, a photographer, or a writer.
I think a lot of fears about AI aren’t fears about AI so much but rather reflect uncertainty about our own skills and ability to manage it. And fears like this are something we as humans have to deal with all the time. I have had to manage my fear of other people throughout my life, for example, which really can only be addressed through experience and developing better social skills (not a strength of mine). It took me a long time to manage my fear when travelling to other countries.
It is responsible and prudent to say that AI should be tested and managed to ensure it doesn’t cause harms. This is true of every technology. Think about how much work goes into ensuring the safety of air travel. Or our cars – consider how much safer we are because of seat belts and airbags and even the AI tools mentioned above. Of course, our entire motor transportation system could be a lot safer.
But it’s not responsible to act as though today is the first day of AI, and that we have no idea how safe it is, what the dangers are, and what the benefits are. A lot of research has been conducted. I’ve produced thousands of pages of admittedly incomplete documentation of the applications, the risks, and the way we’ve responded to them. Thousands of really smart people have spent a lot of time on this. We are – really – entering the age of AI with our eyes open (far more so than we entered the age of, say, internal combustion).
And it’s also not responsible to implicate AI in all the other problems in our society. Of course we have great problems with unethical behaviour generally, especially by our political and corporate leaders and among the wealthy. Inequality is a real problem. We are also dealing with the legacy and current practices of colonialism. Racism is a problem. So is religious persecution and persecution by religions.
But again, it’s not like we haven’t thought about the ethics of AI, and the ethics of technology in general. Arguably, an entire discipline – the ethics of care – has emerged in response to the unfeeling dictates of the professions, institutions, and the tools they use.
The same with environmental concerns. We are in a period of global environmental degradation (quaintly referenced as ‘climate change’). Our irresponsible use of fossil fuels (which continues to this day) along with the depletion of other resources and long-term harms caused by agriculture and urbanization are damaging the environment, making it more difficult to sustain the flourishing and diversity of life on which we all depend.
AI consumes energy, like everything else we do, but it isn’t the cause of environmental degradation. No, the cause is the use of coal rather than solar, oil rather than wind. In Ontario, where I live, almost all electricity production is emissions-free. So the use of AI here doesn’t cause much environmental degradation at all (though our currently regressive government is turning the clock back on that).
None of this is automatic. I don’t suppose for a moment that there isn’t a lot of work to do to move global society to one that is more ethical, equitable, and environmental. All I’m saying here is that it doesn’t make sense to hang all this on AI. Indeed, some of the solutions may be found in AI (though, again, most of the solutions will be found in directly addressing these social issues).
Provided for OEB Global 2023 by Stephen Downes. Read the original article here.