
A brief explanation of AI for teachers

By Nicky

What is intelligence?


In order to understand artificial intelligence (AI), let's first look at human intelligence.


Intelligence is measured on a sliding scale. When is a human truly classed as intelligent? Is a new-born baby intelligent? It knows how to cry to get what it needs, and it can suckle, burp, poo and wriggle, but most of us would not class a baby as intelligent. How about a toddler: are they intelligent?


An average two-year-old can have a vocabulary of anywhere between 75 and 225 words. It has taught itself how to manipulate the muscles in its body so it can move across a number of different terrains, pick up and move tiny objects to precise positions, and solve simple puzzles. And for those of us who have lived through our own children's terrible twos, toddlers certainly know how to manipulate their parents and can set their own goals, much to the consternation of those parents.


Is this intelligence?


But if I give a toddler a complicated long division sum to solve, most of them will not have a clue what I am asking of them.


Yet a computer can do this complicated calculation with ease in a split second. I am so confident that the computer will do it without error that I would use it to check my own answers. I have more faith in my computer than in my own ability to perform calculations. So, can computers be classed as intelligent?
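
As a quick illustration, here is that kind of division in Python (the numbers are made up purely for the example). What would take a person minutes of careful working is a single built-in operation for the machine:

```python
# Long division is a single built-in operation for a computer.
# divmod returns the quotient and the remainder in one call.
dividend, divisor = 987_654_321, 47

quotient, remainder = divmod(dividend, divisor)
print(f"{dividend} / {divisor} = {quotient} remainder {remainder}")
# prints: 987654321 / 47 = 21013921 remainder 34
```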


Deep Blue

What exactly is intelligence? Let's look at Deep Blue, the computer that beat Garry Kasparov at chess. At the time it was seen as a significant leap towards artificial intelligence. Kasparov was truly a brilliant chess player, but he was more than that: he knew how to intimidate his opponents. He would unnerve them so much that they started to doubt their own abilities and would soon start making mistakes.


However, when Kasparov played Deep Blue, this method could not work on an emotionless machine. Deep Blue could not feel intimidated, so it played without doubting its own tactics and didn't fall into the same mistakes that Kasparov's other opponents so often did. Not only was Deep Blue programmed to predict many more possible moves than Kasparov, it was also programmed to use emotional manipulation to intimidate the Russian chess master himself.


The programmers had given Deep Blue instructions to use delays, to make the machine look like it was uncertain about which move to make. Sometimes these delays would last for several minutes and gave Kasparov the impression that he was out-thinking Deep Blue. It gave him a false sense of confidence.


Kasparov playing Deep Blue (Stan Honda / Getty Images)

In another game, Kasparov tried to lure Deep Blue into a trap. Deep Blue worked out the plan but pretended to take the bait, and then, at a crucial moment, moved its queen out of reach, thwarting the attack and leaving Kasparov visibly shaken. Kasparov ended up losing the match and eventually the tournament, later saying he only lost because of his own poor playing rather than Deep Blue's chess-playing abilities.


Many would say that Deep Blue showed signs of intelligence, but if I played Deep Blue at noughts and crosses, it would not know how to play unless the programmers entered all the rules and programmed in the tactics that would allow it to win. Is that really intelligence? It is not learning for itself; it is relying on the programmers predicting all the decisions beforehand, and it is unable to learn from its mistakes.


The success of its game playing depends entirely on the code that somebody else creates for it. Deep Blue is no more intelligent than a digital watch; it cannot break free from its programming. It is a machine that was built for one purpose, playing chess, and it did that very well.
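
To make that concrete, here is a minimal sketch in Python (an illustration only, not Deep Blue's actual code) of what "entering all the rules" looks like for noughts and crosses. Every tactic below was decided in advance by the programmer; the machine learns nothing:

```python
# A hand-coded noughts and crosses player: every "tactic" below
# was written by the programmer, not learnt by the machine.

def winning_lines():
    rows = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    cols = [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
    diags = [[0, 4, 8], [2, 4, 6]]
    return rows + cols + diags

def choose_move(board, me, opponent):
    """board is a list of 9 cells, each 'X', 'O' or ' '."""
    empty = [i for i, cell in enumerate(board) if cell == " "]
    # Rule 1: take a winning move if one exists.
    # Rule 2: otherwise block the opponent's winning move.
    for player in (me, opponent):
        for line in winning_lines():
            marks = [board[i] for i in line]
            if marks.count(player) == 2 and marks.count(" ") == 1:
                return line[marks.index(" ")]
    # Rule 3: a fixed preference - centre, then corners, then edges.
    for i in [4, 0, 2, 6, 8, 1, 3, 5, 7]:
        if i in empty:
            return i

board = list("X O  O X ")          # a made-up mid-game position
print(choose_move(board, "X", "O"))  # 8: blocks O's winning column
```

If we wanted this machine to play draughts instead, a programmer would have to write an entirely new set of rules; nothing here transfers.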


Turing Test

Alan Turing, the computer scientist and mathematician who cracked the German Enigma code in World War II, wanted to answer the question "Can computers think?" However, when he tried to quantify what thinking actually was, he realised there was no measurable, precise quality that psychologists or neuroscientists could agree upon.


Instead he devised a test that would measure whether a computer could trick a human into believing they were communicating with another human. This was known as the "Imitation Game". He had not intended this test to be a measure of intelligence as such, but rather a way to find out whether a computer could behave like a human.



The test involved thirty judges, who would each take part in ten conversations. Five of these would be with humans and five would be with machines. The judges would have a short conversation by typing into a terminal and receiving an answer through their monitor. After each conversation, the judge would vote on whether they thought they had been communicating with a human or a machine. The test was deemed a pass for a machine if more than ten judges were fooled into thinking they were talking to a human.


Turing wrote a sample dialogue that might be produced by a machine showing human traits. His sample dialogue went as follows:


Judge: In the first line of the sonnet which reads ‘Shall I compare thee to a summer's day', would not 'a spring day' do as well or better?


Machine: It wouldn't scan.


Judge: How about 'a winter's day'? That would scan all right.


Machine: Yes, but nobody wants to be compared to a winter's day.


Judge: Would you say Mr. Pickwick reminded you of Christmas?


Machine: In a way.


Judge: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.


Machine: I don't think you're serious. By a winter's day, one means a typical winter's day, rather than a special one like Christmas.


This shows a machine with a wide range of facts to draw from, but also the ability to know how those facts are used in a human world. For instance, it knows a poem needs to "scan" to keep its rhythm, and it understands that although Christmas falls in winter in the northern hemisphere, it is not what most people picture when they use the term "a winter's day".


On 7th June 2014, in a contest marking the 60th anniversary of Alan Turing's death, a chatbot named Eugene Goostman passed Turing's test.


Chatbots are software programs that are designed to simulate human conversation, and they are often used on websites to help users solve common problems. Normally they are limited in the conversations they can have, and if a chatbot is unable to answer a question it will pass the user on to a human for additional help.
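
As a sketch of the idea, here is a toy keyword-matching chatbot in Python (the keywords and answers are invented for the example). Real website chatbots are more elaborate, but the hand-over-to-a-human pattern is the same:

```python
# A toy rule-based chatbot: match keywords against canned answers,
# and hand over to a human when nothing matches.
RULES = {
    "delivery": "Orders usually arrive within 3-5 working days.",
    "refund": "You can request a refund from the Orders page.",
    "password": "Use the 'Forgotten password' link on the login page.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Let me pass you to a human colleague who can help."

print(reply("Where is my delivery?"))          # matched: canned answer
print(reply("Do you have a favourite poem?"))  # no match: human hand-over
```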


Eugene Goostman was designed specifically for the Turing Test. It was programmed to communicate like a 13-year-old boy from Ukraine. As the judges did not expect a teenager with English as a second language to have the same linguistic skills as a native-speaking adult, they were far more forgiving of lapses in its communication skills.


Here is a transcript of Eugene's conversation with one of the judges:


Judge: In the first line of the sonnet which reads ‘Shall I compare thee to a summer's day', would not 'a spring day' do as well or better?


Eugene Goostman: What makes you ask me this? Sound like you can't choose a right answer yourself! Rely on your intuition! :-) Maybe, let's talk about something else? What would you like to discuss?


Judge: I’d like to talk about poetry.


Eugene Goostman: Classics? I hope you aren't going to read "King Lear" or something like that to me right now :-)))


Judge: No, we can talk contemporary poetry. Do you like Patricia Lockwood's work?


Eugene Goostman: I am only 13, so I'm attending school so far.


Judge: Oh, that’s when I first read Auden. Do you have a favourite poem?


Eugene Goostman: No, I haven't a best-liked poem! How could you imagine such a trash! Did I forget it or you really haven't told me about the place you are from?


Judge: Does it matter?


Eugene Goostman: Maybe. I think *you* know the answer. Could you tell me what are you? I mean your profession.


A very different conversation from the one Alan Turing had imagined, and, some would say, a surprising winner of the Turing test.


Rather than a continuing stream of thought, with each statement building on what has been said before and adding its own conclusions that it is willing to argue for, Eugene deflects the first statement in an attempt to move the conversation away from a difficult subject.


Also, Eugene misunderstood the phrase "Patricia Lockwood's work", taking it out of context and assuming the conversation had turned to jobs. If the judges had not been told they were talking to a thirteen-year-old boy who did not speak English as his first language, they may not have been so forgiving of these points, which shows how much we can be swayed by a believable back story.


Using language in a believable way is something that computers still struggle with, partly because they have no real understanding of the real world that we humans live in and completely take for granted.




Is Artificial Intelligence Here?

We are surrounded by AI tools, and many of us use them in our daily work and lives, sometimes without even knowing it. Recommendation systems such as those used by Netflix, Amazon and Spotify use AI to recommend films, products or music based on our previous interactions.


This is an example of Narrow AI (NI). The AI learns from the massive amounts of data it is exposed to and can look for patterns in that data. For instance, on Netflix, people who liked watching Bridgerton are also likely to like Downton Abbey, so if you have watched Bridgerton and ask for a recommendation, you may also be pointed towards the goings-on of the Crawley family and their multitude of staff in Downton Abbey.
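
Here is a toy sketch in Python of that "people who liked X also liked Y" pattern-spotting, using invented viewing histories. Real recommendation systems do something similar across millions of users, with far more subtle statistics:

```python
# A toy recommender: count which titles co-occur with yours in
# other people's (invented) viewing histories.
from collections import Counter

histories = [
    {"Bridgerton", "Downton Abbey", "The Crown"},
    {"Bridgerton", "Downton Abbey"},
    {"Bridgerton", "Downton Abbey", "Poldark"},
    {"Stranger Things", "Dark"},
]

def recommend(watched):
    scores = Counter()
    for history in histories:
        if watched & history:            # this viewer overlaps with you
            for title in history - watched:
                scores[title] += 1       # count each co-watched title
    return [title for title, _ in scores.most_common()]

print(recommend({"Bridgerton"}))
# 'Downton Abbey' comes first: it is co-watched most often.
```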


Narrow AI has a specific goal that it works at perfecting. It cannot break free from its boundaries, much like Deep Blue was unable to play noughts and crosses.


Other examples of NI that are commonplace include virtual assistants, email filtering, navigation apps and large language models like ChatGPT.


These all learn by trial and error and repetition. Lots of repetition. The system is tested, and each time it fails it learns a little bit more about what works and what doesn't. Once it finds one solution, it tries again, attempting other options to find more efficient solutions, until eventually it settles on the best solution it can find.
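
Here is a toy Python illustration of that trial-and-error loop (the scoring function is made up, and real AI training is far more sophisticated). Each failed attempt is thrown away, and each success becomes the new starting point:

```python
# Trial and error in miniature: try a random tweak, keep it only
# if it scores better, and repeat many thousands of times.
import random

def score(x):
    # An invented measure of success the learner knows nothing
    # about in advance; it peaks at x = 7.3.
    return -(x - 7.3) ** 2

best = 0.0                       # an arbitrary first guess
for attempt in range(10_000):    # lots of repetition
    candidate = best + random.uniform(-1, 1)
    if score(candidate) > score(best):
        best = candidate         # keep whatever works better
print(round(best, 2))            # homes in on 7.3
```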


Watch this video to see how programmers created a simulated game of hide and seek, with only a few rules about the physical world the characters found themselves in and a simple goal for each team: the red team needs to find the blue team members and catch sight of them, while the blue team has to evade being seen for as long as possible. The video shows how the program learnt the rules, perfected its play and then pushed the limits of the simulation further than the original programmers had envisioned, creating ever more efficient solutions.



So narrow AI is certainly here.


But is that true Artificial Intelligence?

True intelligence would require a machine to be able to set its own goals, much like a toddler can decide what it wants. AI that has an ambition to do something without being told the goals it is working towards is called "Artificial General Intelligence" (AGI).


AGI is the concept of a machine having the ability to understand, learn and apply knowledge in a way that is comparable to a human being. That would require something like the emotion of desire, and that has not yet been discovered or developed.


As of now, AGI is still a theoretical concept and has not been achieved. Researchers and scientists are working towards it, but the technology is not yet available. The development of AGI poses significant technical, ethical and philosophical challenges and it remains a topic of active research and debate.

Experts cannot agree on when that might happen, and some doubt it ever will, but many believe that Artificial General Intelligence will arrive at some point: machines will develop their own goals and then find the most efficient solutions to reach them.


Narrow AI can outperform humans at the specific goal it has been set, such as predicting financial market trends or driving a car, but Artificial General Intelligence would outperform humans in nearly every task it attempts.


Maybe a better question to ask is: what will become of humans if we manage to create a machine that can set its own goals and has an intelligence superior to our own? How will we contain it? Surely anything we can conceive of to contain and control an AGI, it would already have predicted and developed a method to break out of.


The development of Artificial General Intelligence brings with it a range of concerns that are both technical and ethical in nature. Here are some of the primary concerns:

  1. Control and Safety: Ensuring that AGI systems behave in ways that are beneficial and safe for humans. There's a fear that AGI might act unpredictably or even harmfully if not properly controlled.

  2. Ethical Considerations: The potential for AGI to make decisions that could have significant ethical implications, such as in healthcare, law enforcement, and warfare.

  3. Employment Impact: The possibility that AGI could automate a wide range of jobs, leading to significant economic disruption and unemployment.

  4. Bias and Fairness: Ensuring that AGI systems are free from biases that could lead to unfair treatment of individuals or groups.

  5. Privacy Concerns: The potential for AGI to collect, analyse and misuse vast amounts of personal data.

  6. Existential Risk: Some theorists worry about the long-term risk that AGI could surpass human intelligence and act in ways that are not aligned with human values and interests.

  7. Regulation and Governance: The challenge of creating appropriate regulatory frameworks to oversee the development and deployment of AGI.


Many AI experts at the 2015 Puerto Rico conference predicted that AGI will be here this century. As creating safeguards may take many decades, safety research is therefore being carried out now, alongside the development of AI itself.


Thankfully, this is an area that many researchers and experts are concerned about and are actively working on, so that when AGI is launched into the world, we are prepared.


Addressing the concerns surrounding AGI involves a multifaceted approach, with efforts being made across various areas to ensure the safe and ethical development of AGI. Here are some key initiatives and strategies:

  1. Research and Collaboration: Organisations and research institutions, such as OpenAI (co-founded by Elon Musk), DeepMind and academic universities, are actively researching AGI safety and ethics. Collaborative efforts, like the Partnership on AI, bring together tech companies, researchers and non-profits to address these challenges collectively.

  2. Ethical Guidelines and Frameworks: The development of ethical guidelines and frameworks to guide the responsible design and implementation of AI systems. This includes principles like transparency, accountability and fairness.

  3. Regulation and Policy: Governments and international bodies are working on policies and regulations to oversee AI development. This includes initiatives like the European Union's AI Act, which aims to create a legal framework for AI use.

  4. Bias Mitigation: Researchers are developing techniques to identify and reduce biases in AI systems, ensuring that AGI operates fairly across different demographic groups.

  5. Public Engagement and Education: Increasing awareness and understanding of AI technologies among the public to foster informed discussions about their implications and governance.

  6. Interdisciplinary Approaches: Combining insights from fields like computer science, ethics, law, sociology, psychology, economics and political science to address the complex challenges posed by AGI.

  7. Safety Research: Focusing on AI alignment and control problems to ensure that AGI systems' goals and actions are aligned with human values.


Artificial Intelligence has become an integral part of our daily lives, with Narrow AI systems like recommendation engines, virtual assistants and navigation apps surrounding us. These systems excel at specific tasks by learning from vast amounts of data and refining their performance through trial and error.


However, the concept of Artificial General Intelligence remains theoretical, envisioning machines that can set their own goals and possess human-like understanding and adaptability.


While AGI could potentially outperform humans in various tasks, its development raises significant technical, ethical, and existential concerns. As experts anticipate AGI's arrival within this century, ongoing safety research is crucial to prepare for its potential impact on society.


The question remains: how will we manage to coexist with a machine intelligence that surpasses our own?
