This activity aims to explain how AI can be used responsibly and effectively.
Click on each heading to find out more.
Artificial Intelligence (AI) refers to computer systems that can learn from data and use that learning to make decisions, predictions, or classifications. Instead of following a fixed set of instructions, AI systems look for patterns and adjust their behaviour based on what they've seen before.
You already use AI every day, often without noticing. Examples include:
These systems aren't "thinking" like humans - they're using large amounts of data to make educated guesses.
It's also important to remember that AI is a broad field. Chatbots like ChatGPT are just one type of AI. Many AI systems don't generate text at all; they classify images, detect fraud, translate languages, or help doctors analyse scans.
Because chatbots (LLMs) are the most visible form of AI now, many people assume that AI and LLMs are the same thing - but LLMs are just one type of AI.
Large Language Models (LLMs) are programs designed to mimic human language so they can interact with people in a natural, conversational way. They are an evolution of earlier chatbots, leading to systems such as ChatGPT, Copilot, Claude, and Grok.
LLMs work by analysing patterns in huge amounts of text. During training, they are shown billions of examples of sentences, conversations, articles, and books - most of it collected from the internet. From this, they learn statistical patterns: which words tend to appear together, how sentences are structured, and how ideas are usually expressed.
It should be noted that LLMs do not store the sources they are trained on; they only learn patterns from them.
Modern LLMs use a type of neural network called a transformer architecture, which allows them to process text efficiently and generate responses that feel coherent and context-aware.
As LLMs do not understand meaning, they can produce confident-sounding answers that are wrong.
It is important to note that LLMs do not "understand" language in the same way humans do. They do not have beliefs, opinions, or awareness - they simply predict the most likely next word based on the patterns they have learned.
It's like playing an endless game of 'guess the next word', based on everything the model has seen before.
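As a toy illustration of 'guess the next word', here is a sketch of the simplest possible version: counting which word most often follows another in a tiny sample of text. Real LLMs use neural networks trained on billions of examples rather than simple counts, so this is only an analogy, not how they actually work internally.

```python
from collections import Counter, defaultdict

# A tiny "training set" - real LLMs learn from billions of sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word tends to follow each word (a "bigram" model).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def guess_next(word):
    """Return the word most often seen after `word` in the training text."""
    return next_word_counts[word].most_common(1)[0][0]

print(guess_next("sat"))  # prints 'on' - 'sat' is always followed by 'on' here
print(guess_next("on"))   # prints 'the'
```

Notice that the model has no idea what a cat or a rug *is*; it only knows which words tend to appear together, which is exactly why confident-sounding but wrong answers are possible.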
One concern with using AI for schoolwork is that it can become tempting to let the tool do the thinking for you. If you rely on an AI to write your essays, solve your problems, or explain every idea, you miss out on practising the skills you need to develop - like structuring arguments, analysing information, or working through a tricky question yourself.
Over time, this can make it harder to learn independently. You might get the right answer in the moment, but you lose the chance to build confidence in your own abilities.
AI can be a useful support, but it shouldn't replace the process of learning. The goal is to use it to help you think, not to think instead of you.
Let's look at some options for how LLMs may be used to assist with a homework assignment.
Pros
Cons
Pros
Cons
Pros
Cons
Pros
Cons
Climate impact is a major concern for both users and developers. As people become more aware of their carbon footprint and water usage, questions are being raised about the environmental cost of running large AI systems.
We spent some time trying to find reliable figures for CO2 emissions and water consumption associated with AI. There are many different estimates available, and they vary widely depending on the source, the model, and the assumptions used. This makes it difficult to know which numbers to trust.
OpenAI (the company behind ChatGPT) and its CEO, Sam Altman, have published figures for their system, and we have chosen to use these. Although these values are likely to be conservative and to minimise the environmental cost, they still allow us to understand the scale of the impact. Even with these lower-bound estimates, the results are striking.
Based on the published data, the daily CO2 emissions from ChatGPT are around 255,000 kg. Offsetting this would require more than four million trees. The system also consumes approximately 241,320 litres of water per day.
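As a rough sanity check on the tree figure, we can work backwards from the published daily emissions. The absorption rate used here (about 22 kg of CO2 per mature tree per year) is a commonly cited estimate and an assumption on our part, not part of the published data:

```python
daily_co2_kg = 255_000        # published daily CO2 emissions (kg)
kg_per_tree_per_year = 22     # assumed absorption by one mature tree per year

annual_co2_kg = daily_co2_kg * 365
trees_needed = annual_co2_kg / kg_per_tree_per_year

print(f"{trees_needed:,.0f} trees")  # roughly 4.2 million trees
```

The result lands just above four million trees, which matches the scale of the published claim.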
These calculations use average values. The exact environmental cost depends on the complexity of the user requests: short text queries use less energy and water, while tasks such as image generation require significantly more.
These figures only reflect the daily running costs of an already-trained model. They do not include the environmental impact of developing the system in the first place. Training a large AI model requires enormous amounts of computing power, which means far higher energy use, greater water consumption, and significant hardware demands. The development phase is usually far more climate-intensive than day-to-day usage, but because companies rarely publish full training data, the true cost is difficult to measure.
It should also be recognised that the data used in this session is very time-specific (2025/26), and that there are hopes that, as the technology evolves, such costs will be reduced over time.
Every time you use an AI chatbot, you're tapping into a building full of powerful computers running 24/7. That uses energy and water, just like streaming videos or gaming online. Knowing this helps you decide when AI is worth using - and when it's better to think something through yourself.
AI systems can be useful, but they also come with risks that are important to understand. These issues don't mean you should avoid AI completely - they just mean you need to use it thoughtfully.
Fake news stories, images, and videos have become more prevalent now that these systems let anyone generate them. The overall effect is that we can no longer fully trust anything we read, see, or hear online.
This surge of 'AI slop' has flooded social media platforms with feeds full of fictional or fake posts - which is both annoying and misleading.
Some AI tools store the prompts you type into them. Depending on the platform, this information may be used to improve the model or reviewed by humans. It's important not to share personal, sensitive, or identifying information with any AI system.
AI models can sometimes produce harmful, offensive, or inappropriate outputs, even when you don't ask for them. Companies use filters to reduce this, but they aren't perfect.
LLMs can produce answers that sound confident but are completely wrong. They don't "know" facts - they predict text based on patterns.
When a user requests references for their work, an LLM will produce things that look like references, built from the patterns it has learned - often the sources don't really exist.
AI models are trained on huge amounts of text from the internet. This includes high-quality information, but also outdated, inaccurate, or biased material. The model can't tell the difference unless it has been specifically trained to.
This means the training data for a system we use for research, fact-finding, and answering questions already contains flawed information. Current LLMs could then flood the internet with further inaccuracies and misinformation, so future versions will train on even more flawed material. This would reduce the reliability not only of the internet but also of future iterations of our AI-powered assistants.
Building and running large AI systems requires specialised hardware, and demand for these components has increased rapidly.
The hardware demands of the data centres needed to store the training data for these systems, and everything they collect from their users, are massive. The US alone has announced the opening of a hundred new large data centres.
One impact already being felt is a sudden increase in price and decrease in availability of processing and storage components for computers, delaying development across all other areas of technology - especially in domestic settings. One of the most famous examples at this time is the delay, and possible scrapping, of a new computer called the Steam Machine because the necessary parts could not be obtained while keeping costs down for buyers.
Many concerns have been raised about the use of copyright-protected material in the training and development of LLMs.
Copyright exists to protect the work of an author or creator and to ensure no one else profits from it.
Ignoring this means that AI systems are re-using content at a cost to the original author or creator, which raises ethical and legal questions about ownership and fair use.
Companies and institutions are making more use of LLMs to save costs and time. For example, an AI-powered online customer chat system responds to queries much faster than a human operator.
However, the more we replace the human element in customer services and other services such as health advice/provision, the more isolated and lonely people become - especially our most vulnerable.
There is also the concern that the expectation that everyone can operate such technology leaves more of our population vulnerable.
Could you go a day without asking Google (which now incorporates its own AI assistant) or any LLM something?
Becoming overly reliant on any technology is problematic. It gives the companies offering these services an opportunity to exploit their users by raising the price of the most popular features; people would pay because they can no longer 'live' without them. The other issue is that if something were to happen to that service, how quickly would people adjust, having never developed the skills needed to do a task without an LLM's assistance?
Use AI to learn, not to replace learning
AI can be a helpful tool, but only if you use it in a way that supports your own thinking. Here are some simple guidelines to help you get the benefits without losing the chance to build your skills.
Start with your own ideas
Before you open an AI tool, take a moment to think about the task yourself. What do you already know? What are you trying to understand? Even a quick brainstorm helps you stay in control of the learning.
Use AI to explore, not to copy
Ask AI to explain a concept, give examples, or help you see a problem from a different angle. Don't just paste the answer into your work - use it to deepen your understanding.
Check everything
AI can sound confident even when it's wrong. Always cross-check facts with reliable sources, especially for science, history, and anything that needs accuracy.
Keep your personal information safe
Never share your full name, address, school, or anything sensitive with an AI tool. Treat it like a public space.
Make the final work your own
Use AI as a starting point, not the finished product. Put things into your own words, add your own examples, and show your own thinking. That's how you actually learn.
AI tools don't just affect your homework - they affect the world around you. When you use an LLM, you're also using energy, data, and computing power. It's important to understand the wider impact so you can make informed choices.
It's important to remember that most AI systems are not LLMs - and many of them are far more efficient and specialised.
There are a large number of different AI approaches and uses other than LLMs, some of which are listed below.
All these alternative uses of AI consume far less energy, storage, and memory. This is because they require much smaller training datasets and perform very specific tasks - whether that's detecting tumours in medical scans or giving you TV-show recommendations within a streaming service.
These specialised systems are cheaper, more environmentally friendly, and often more accurate, which allows them to have a positive impact on everyone's lives. Yet the issues surrounding LLMs, and their constant association with the label 'AI', overshadow these ground-breaking systems. As a result, many researchers have adopted the term 'machine learning' for AI approaches that are not LLMs, to avoid the negative press.
We've already hinted above at some of the systems and applications for these machine learning approaches. This section will have a more in-depth look at some of these and their impact.
DeepMind's AlphaFold can predict the 3D shape of proteins from their amino-acid sequence. This was a massive scientific breakthrough because protein shapes are key to understanding diseases and developing new medicines.
Why it matters: It solved a problem biologists had been working on for 50 years.
Companies like Tesla, Waymo, and others use machine learning to help cars: detect pedestrians, read road signs, predict movements of other vehicles, and plan safe routes.
Why it matters: It's a real-world application of computer vision and decision-making to make road travel safer for everyone.
Various AI models have been trained to analyse different medical images to spot things like tumours, disease, or fractures.
Why it matters: It helps doctors diagnose and treat illnesses faster and more accurately.
Machine learning models trained on drone imagery of crop growth and agricultural land help identify diseased crops, optimise irrigation, and reduce pesticide usage.
Why it matters: Supports sustainable farming and improves international food security.
There are many more examples and applications we could mention here, but we believe it's important to recognise that since the introduction of these systems there has been a surge in breakthroughs across all subject areas. Even the simplest applications of AI models to datasets have increased the speed at which we process data and detect patterns.
In 2024, two Nobel Prizes were awarded for the creation and application of machine learning/AI tools.
Two researchers won the Nobel Prize in Physics for their development of artificial neural networks that allow computers to store, reconstruct, and autonomously find properties in data.
The DeepMind AlphaFold project mentioned in the previous section earned a share of the Nobel Prize in Chemistry for solving the 50-year-old protein-folding problem, predicting structures for almost all 200 million known proteins.