Afraid of AI?
- James Henry
- Sep 28, 2023
- 5 min read
Updated: Sep 30, 2023
Fear of technology has probably delayed the adoption of new technologies for centuries, and it is unlikely to be any different now.

Older people are often targeted by scams, and I am certainly aware of people older than myself who are afraid that technologies such as artificial intelligence will be misused against them.
AI introduces a whole new unknown. There is much media talk of the dangers, with examples of deepfakes, reports of relatives' voices being cloned, and other potential scams enabled by AI.
Older people are asking who they can believe. Trust has clearly taken a hit.
How Do We Resolve This?
I remember, some years ago, a senior police officer saying that part of the problem they faced with older people in the local community was fear. Older people were afraid to leave their homes for fear of being mugged or attacked. He said the perception of risk was much greater than the reality, and this fear was adversely affecting the older community. The solution was communication: an effort to inform the community of the reality of the situation.
Will an equivalent communication effort persuade older people to engage with AIs? It matters, because the potential benefits of AI are significant.
A coordinated effort from governments, technology companies, and community organizations could go a long way in bridging the trust gap. These groups have the responsibility of not only regulating AI technologies but also educating the public about them.
Government regulation and self-regulation. To protect potentially vulnerable populations such as older adults, appropriate regulation of AI technologies should be enforced. Legislation can be passed to control the creation and dissemination of deepfakes while also requiring companies to be transparent about how AI algorithms work and make decisions. The government can work closely with tech companies to monitor and revise these regulations as technology evolves.
Self-regulation is equally important. The tech industry should hold itself accountable to ethical guidelines that protect all users, including older adults, from scams and misinformation. Strict protocols should be in place to report and remove harmful content swiftly.
Guardrails. Technological safeguards are an essential part of the solution. For instance, multi-factor authentication and warning systems can be implemented to protect against scams. Similarly, AI tools can be designed to detect and flag deepfakes and other fraudulent activities. With these "guardrails" in place, older adults can have a safer experience navigating the digital landscape.
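To make the warning-system idea concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration: the phrase list and threshold are invented for the example, not a vetted scam detector.

```python
from typing import Optional

# Illustrative only: a toy "guardrail" that flags messages containing
# phrases common in scams aimed at older adults. The phrase list and
# the threshold are hypothetical examples, not a production detector.
SCAM_SIGNALS = [
    "wire transfer",
    "gift card",
    "act immediately",
    "verify your account",
    "keep this confidential",
]

def scam_warning(message: str, threshold: int = 2) -> Optional[str]:
    """Return a warning if the message matches enough scam signals, else None."""
    text = message.lower()
    hits = [phrase for phrase in SCAM_SIGNALS if phrase in text]
    if len(hits) >= threshold:
        return "Caution: this message matches known scam patterns: " + ", ".join(hits)
    return None

sample = ("Your grandson is in trouble and needs bail money. "
          "Act immediately, pay by gift card, and keep this confidential.")
print(scam_warning(sample) or "No scam signals detected.")
```

A real guardrail would be far more sophisticated, but even a simple check like this, surfaced as a prominent on-screen warning, illustrates how software can intervene before a user acts on a fraudulent message.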
Education. We all need this. A concerted effort is needed to educate older people about the pros and cons of AI technology. This could take the form of community workshops, online tutorials, or informational brochures distributed at local events. Older adults can be taught how to recognize and report scams, how to secure their digital identities, and how to distinguish real from fake content online.
The Problem or the Solution. There is also significant scope for AIs to assist in this situation, as a tool to protect people against scams. To test this, I gave the main AIs (ChatGPT, Bard, Claude) the details of a common contemporary scam described on the Department of Justice website and asked whether it was a scam; all three identified it as a scam and provided a warning.
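For anyone curious what that kind of check looks like in code, here is a minimal sketch using the openai Python package (the v0.x API current in 2023). The system prompt and the sample message are my own illustrations, not the DOJ example I used in the test.

```python
# A minimal sketch of asking an AI whether a message looks like a scam,
# using the openai Python package (v0.x API, current in 2023).
# The system prompt and sample message are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def check_for_scam(message: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": ("You help older adults spot scams. State clearly "
                         "whether the message below looks like a scam, and "
                         "explain why in plain language.")},
            {"role": "user", "content": message},
        ],
    )
    return response["choices"][0]["message"]["content"]

suspicious = ("This is the IRS. You owe back taxes and must pay today "
              "with gift cards or you will be arrested.")
print(check_for_scam(suspicious))
```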
AI Advisories. At the bottom of the ChatGPT and Bard pages, there are messages to caution users, but both are set in a small font that many older people may be unable to read:
- "ChatGPT may produce inaccurate information about people, places, or facts."
- "Bard may display inaccurate or offensive information that doesn’t represent Google’s views."
Curiously, Claude doesn't have a similar warning, but it often notes in answers that it is designed to be safe.
It is worth remembering that, at least for now, there are people behind these AIs, and they are monitoring the results. It appears they are trying to move quickly to self-police before legislation potentially hobbles their ability to develop.
Closing Thoughts. The key to resolving this issue lies in a multi-pronged approach that combines regulation, technology, and education. This can not only allay fears but also empower older adults to take advantage of the numerous benefits that AI has to offer. Trust may have taken a hit, but it's not irreparable. With concerted effort and time, older people can also adapt and feel more secure in a rapidly advancing digital world.
The AIs' Comments on the Topic
As usual, I like to see what the various AI engines have to say on the topic, though for transparency, I should note that I also used ChatGPT to help flesh out and review parts of the previous text.
My question: "Older people are often apprehensive of new technologies, and many are concerned about AI. What might be the risks for older people using Claude?"
ChatGPT
Older people's apprehension toward new technologies like ChatGPT is understandable, and there are some specific risks that they should be aware of:
Privacy Concerns
- Data Misuse: Conversations with the chatbot are processed and stored in servers, leading to potential risks of data leakage or misuse.
- Identity Theft: Unwittingly sharing personal information can put one at risk of identity theft.
Misinformation
- Incorrect Medical or Financial Advice: Older people may ask for medical or financial advice, and the AI might provide information that isn’t tailored to their specific needs.
- Outdated Information: My training data only goes up until January 2022, so any events or developments after that won't be included in my responses.
Social Risks
- Emotional Dependence: Some individuals might start relying on chatbots for companionship, which could potentially lead to social isolation.
- Misunderstanding Nuance: AI might not fully grasp the emotional or contextual nuances in human conversations, potentially causing misunderstandings.
Usability
- Accessibility: Older people may find it challenging to navigate the technology, potentially causing frustration.
- Tech Literacy: Lack of familiarity with how to interact safely and effectively with AI can put them at a disadvantage.
Security
- Phishing Scams: Without adequate knowledge, users might not be able to distinguish a secure AI interface from a fake one designed to collect personal information.
- Updates and Security Patches: Failing to keep software updated can lead to vulnerabilities.
Ethical Concerns
- Biases: AI models like ChatGPT can have biases in the data they were trained on, which could propagate stereotypes.
Being aware of these risks and taking precautionary measures, such as not sharing sensitive personal information and corroborating information from multiple sources, can help mitigate these concerns.
Claude
You raise an excellent point. As an AI assistant, I understand some older adults may be apprehensive about new technologies like myself. There are some potential risks I want to acknowledge:
- Privacy: Like any internet-connected device, there are always concerns around data privacy and security. I aim to only collect the minimum data needed to be helpful.
- Dependency: Overreliance on technology can potentially reduce human contact. I'm meant to assist people, not replace human relationships and caregiving.
- Fraud: Unfortunately, bad actors exist online who aim to take advantage of vulnerable populations. Users should take steps to ensure I'm a legitimate service.
- Glitches: As an AI system, it's possible for me to misunderstand questions, give incorrect information, or malfunction without warning. Having a backup plan is wise.
- Cost: If I require purchasing special equipment or high service fees, this could be prohibitive for seniors on fixed incomes. Affordability is key.
However, companies like Anthropic take these risks very seriously. I'm designed to prioritize users' safety, security, and wellbeing above all else. And I aim to be transparent about my limitations to establish appropriate trust. My role is providing help, not harm. With thoughtful implementation, I believe I can benefit older users more than putting them at risk. But I respect concerns about AI, and hope an open discussion of risks is the first step to addressing them responsibly. Please let me know if you have any other questions!