Artificial Intelligence


There is a legend of a man named John Henry, a laborer who worked on the railroads. His job was to hammer steel spikes into solid rock, making holes for dynamite charges. Henry was the best at what he did, working harder and faster than anyone else.

But then, a contest was organized between John Henry and a steam-powered drilling machine, also known as a jackhammer. The competition ran until finally the man beat the machine, only to have his heart give out from exhaustion. The jackhammer, of course, was put back to work.

Construction equipment, home appliances, vehicles, and all sorts of other technology have changed our world. They have replaced the need to do by hand what can now be done by machine. Computers, too, have transformed the need to think. We no longer have to calculate, tabulate, or record data by hand. There are electronic machines that do it for us.

All of us know that it is only a matter of time before computers can think. We are already seeing it with the personal assistants in our phones, and in the recommendation engines that companies use to offer us products and movies. Even when you're typing an email or searching the web, the words you might use next appear ahead of your cursor. These suggestions are getting better and better, and we can all imagine that they will soon extend to whole sentences and paragraphs. Artificial intelligence (AI) is coming.

New technologies appear all the time, but this one is different for two reasons. First: a new form of intelligence is not something human beings have ever engaged with before. Second: the rate of growth of technology is already exponential, which means it's likely to be here before we know it.

Safety Research

There are an enormous number of full-time researchers and professionals trying to build intelligent systems, but only a few are focused on AI safety. To quote Wyatt Berlinic:

AI Safety asserts that AI can be beneficial or detrimental and, without working to make it beneficial, it will be detrimental by default. In the same way that a poorly designed building might collapse and harm thousands, a poorly designed self-driving car might cause many crashes and harm thousands. AI Safety is work and research that ensures AI is beneficial, not harmful.

This topic can feel easy to dismiss as science fiction. But as anyone paying attention has seen, what was science fiction only a few years ago has become science fact.

The role of government is to encourage AI safety research with at least as much support as applications of AI. That means understanding what is possible and what should be pursued or limited.

Or to put it more simply: just because we can, doesn't mean we should.

Confidently Incorrect, Absolutely Not Understood

Another issue with current AI (and likely much of what we will have in the near future) is that while it can generate responses that appear sensible, it has no way of understanding what it made and no process for self-evaluation. Answers to questions may sound authoritative but be totally wrong [1]. Engineering answers and code snippets provided by these tools are off more than half the time [2]. We're already trying to use AI for medical diagnostics [3]; who do we blame when mistakes are made by algorithms that don't even know what it means to make a mistake? Given how little we trust each other, what happens when we make decisions based on false data from algorithms instead of what has been our guide through all of history: logic, consensus, and shared understanding?

Perhaps the most profound issue with artificial intelligence is that we don't really understand how it works. Virtually every other complicated technical process that machines do on our behalf can be broken down into smaller steps. You don't have to know how a car operates to be able to drive one, but it is straightforward enough to look at all of the pieces one by one to see how they fit together.

Yet with AI there often is no way to explain what is happening, other than to acknowledge the effectiveness of the emergent behavior. The computer system can give us answers, but it is not arriving at those answers in a way that makes sense to humans [4].

And that, alone, is reason enough to pause and consider how we want to proceed.


[1] https://www.worklife.news/technology/leaders-are-blindly-ignoring-the-dangers-of-confidently-incorrect-ai-and-why-its-a-massive-problem/

[2] https://www.zdnet.com/article/chatgpt-answers-more-than-half-of-software-engineering-questions-incorrectly/

[3] https://www.scientificamerican.com/article/ai-chatbots-can-diagnose-medical-conditions-at-home-how-good-are-they/

[4] https://www.3blue1brown.com/lessons/neural-networks