
AI Ethics for CBSE Class 10: Self-Driving Cars, Data Privacy, AI Bias, & Access
Welcome to your complete guide for Chapter 4: AI Ethics, designed for CBSE Class 10 students. As AI becomes part of our daily lives, from self-driving cars to smartphone apps, it is essential to understand the tough questions it brings up. This guide breaks down the core topics, starting with the moral dilemmas of AI, like the famous self-driving car problem. We will then investigate how your data privacy is handled by apps, explore the different sources of AI bias in search results and voice assistants, and discuss how AI access and the digital divide are changing the job market for everyone.
Learning Outcomes
By the end of this chapter, you will be able to:
- Articulate ethical trade-offs in scenario-based problems.
- Recognize and describe privacy concerns related to data collection.
- Identify different sources of AI bias and their real-world impact.
- Explain how unequal access to AI can create new challenges.
1. Moral Dilemmas: The Self-Driving Car
Imagine a self-driving car. Its brakes fail. It must choose between two options: stay on course and hit five people, or swerve and hit one person on the sidewalk. What should it do?
This is a famous thought experiment. It highlights two main ways of thinking about ethics:
- **Option 1: Focus on the Outcome.** This view suggests the best action is the one that causes the least harm. In this case, hitting one person is better than hitting five, so the car should swerve.
- **Option 2: Focus on the Action.** This view suggests some actions are just wrong, no matter the outcome. Swerving is a deliberate choice to hit someone, so this view might say the car should do nothing and stay on course.
The key point is this: the car does not “decide” in the moment. Engineers and companies programmed its rules months or years earlier. The real question is not “What will the car do?” but “What values did humans program into the car?”
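Because the rule is written in advance, it can even be expressed as code. Here is a toy Python sketch (not a real vehicle controller; the harm counts are hypothetical inputs) contrasting the two options:

```python
# Toy sketch only: not a real vehicle controller. It contrasts the two
# pre-programmed rules for the brake-failure scenario.

def outcome_rule(stay_harm: int, swerve_harm: int) -> str:
    """Option 1: pick whichever action causes the least total harm."""
    return "swerve" if swerve_harm < stay_harm else "stay"

def action_rule(stay_harm: int, swerve_harm: int) -> str:
    """Option 2: never deliberately redirect harm onto a bystander."""
    return "stay"

print(outcome_rule(stay_harm=5, swerve_harm=1))  # "swerve": one casualty is fewer than five
print(action_rule(stay_harm=5, swerve_harm=1))   # "stay": swerving is a deliberate act of harm
```

Notice that the "decision" is entirely determined by which function the engineers chose to ship months earlier.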
Think About It
If you were programming the car, what rule would you set?
2. Data Privacy: Your Phone and Your Data
Modern AI runs on data. Your data. Your smartphone is a powerful data collection tool. When you install a “free” app, it often asks for permissions.
- Access to your Microphone can capture nearby conversations.
- Access to your Location (GPS) tracks where you go.
- Access to your Contacts sees who you know.
- Access to your Browsing History knows what you read.
*(Infographic: The Flow of Your Data)*
This data is used to build a profile of you. This profile helps companies predict your behavior. They use these predictions to show you targeted ads or to keep you on their app longer.
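Here is a hypothetical Python sketch of how separate permission streams could be combined into one profile; every name and value below is made up:

```python
# Hypothetical illustration only: all places, interests, and numbers are
# made up. It shows how separate permissions combine into one profile.

permissions_data = {
    "location": ["home", "school", "shopping mall"],    # from GPS
    "contacts": 120,                                    # size of social graph
    "browsing": ["cricket scores", "sneaker reviews"],  # from history
}

# Combine the separate streams into an advertising profile.
profile = {
    "frequent_places": permissions_data["location"],
    "likely_interests": permissions_data["browsing"],
    "friends_to_target": permissions_data["contacts"],
}

# Prediction: show sneaker ads when the user is near the mall, and
# suggest the app to their 120 contacts.
print(profile)
```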
The “I Agree” Problem
You give permission by clicking “I Agree” on a long “Terms of Service” document. Almost nobody reads these. They are often long, confusing, and non-negotiable. This is not meaningful consent. Clicking “Agree” often protects the company more than it informs you.
Beyond the Phone: Smart Sensors and Consent
This problem extends beyond phones. Smart speakers (like Alexa or Google Home) are “always listening” for a wake word. This means they process everything you say. Internet of Things (IoT) devices, like smart TVs or even refrigerators, also collect data about your habits.
This raises a deeper question about informed consent. Did you truly “consent” to your TV tracking your viewing habits if it was in paragraph 72 of a document you clicked “OK” to? Ethical AI design requires consent to be clear, easy to understand, and easy to withdraw.
What App Permissions Really Mean
| Permission | What it can also mean |
|---|---|
| Location | "To know where you live, work, shop, and visit. We can sell this data to advertisers who want to target people in those areas." |
| Contacts | "To build a map of your social network. We can target your friends with ads or suggest they use our app." |
| Microphone | "To listen to ambient sound, conversations, or TV shows you watch to build a more detailed advertising profile." |
3. AI Bias: When AI is Unfair
AI systems are not neutral. They are created by humans and trained on data from our world. This means they can learn, and even amplify, human biases. This is AI Bias.
Where Does Bias Come From?
- **Data Bias:** The data used to train the AI is skewed. If a facial recognition system is trained mostly on pictures of one demographic, it will have higher error rates for all other demographics.
- **Algorithmic Bias:** A person's flawed assumption is built into the AI's logic, for example, using "zip code" to decide if someone gets a loan. Zip code can be a proxy for race due to historical housing segregation (see the sketch after this list).
- **Interaction Bias:** The AI learns from biased human users. If people searching for "CEO" only click on pictures of men, the AI learns to show only men. This creates a feedback loop.
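To make the proxy problem concrete, here is a toy Python sketch; the applicants, incomes, and zip codes are entirely made up:

```python
# Toy example of a harmful proxy; all applicants and zip codes are made up.
# The rule never mentions a protected attribute, yet in a city with
# historical housing segregation, zip code can stand in for one.

applicants = [
    {"name": "A", "zip_code": "110001", "income": 50000},
    {"name": "B", "zip_code": "110002", "income": 50000},
]

def approve_loan(applicant: dict) -> bool:
    # Flawed human assumption baked into the logic:
    # "people from 110001 are risky borrowers".
    return applicant["zip_code"] != "110001"

for a in applicants:
    # Same income, different outcome, purely because of the proxy.
    print(a["name"], approve_loan(a))
```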
*(Infographic: The Interaction Bias Feedback Loop)*
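The feedback loop can be simulated in a few lines of Python. This is a minimal sketch with assumed numbers; it shows only the mechanism (rank attracts clicks, clicks boost rank), not any real search engine:

```python
# Minimal simulation with assumed numbers: the top-ranked result attracts
# the most clicks, and clicks raise the ranking, so a small initial skew
# keeps growing over time.

scores = {"photos_of_men": 0.55, "photos_of_women": 0.45}  # initial skew

for step in range(5):
    top = max(scores, key=scores.get)  # whoever ranks first today...
    scores[top] += 0.10                # ...collects most clicks and gets boosted
    total = sum(scores.values())
    shares = {k: round(v / total, 2) for k, v in scores.items()}
    print(f"step {step}: {shares}")
# The shares drift toward the group that started with a small head start.
```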
Examples of AI Bias
| Case Study | Type of Bias | Real-World Harm |
|---|---|---|
| Voice Assistants: lower accuracy for non-standard accents | Data-Driven | Exclusion: makes the tech unusable for some groups |
| Image Search: a search for "doctor" shows stereotypes | Interaction / Feedback Loop | Shapes Reality: limits students' view of what is possible |
| Facial Recognition: higher error rates for minority groups | Data-Driven | Systemic Injustice: can lead to wrongful arrests |
4. Access, Inclusion, and Jobs
AI is changing the job market. This creates a large challenge related to access and fairness.
The Changing Job Market
- **Task Automation:** AI can automate routine tasks, like data entry or simple accounting. This can displace workers in those jobs.
- **New Roles:** AI also creates new, high-skill roles, such as data scientist, AI ethicist, and machine learning engineer.
The problem is that the new jobs require advanced, often expensive, training. It is difficult for a person whose job is automated to “reskill” for these new roles. This can widen the gap between high-income and low-income groups.
The “Digital Divide”
The “Digital Divide” is the gap between those who have access to modern technology and those who do not. This gap is not just about having a computer. It has many layers:
- The Infrastructure Gap: Lack of access to high-speed internet and modern devices.
- The Literacy Gap: Lack of skills to use technology for creation and critical thinking, not just consumption.
- The Educational Gap: Unequal access to high-quality AI education in schools.
This divide is important because it makes all other AI harms worse. A person without digital literacy is less able to spot bias and more vulnerable to data collection. A student without good internet cannot access the “reskilling” tools needed for the new economy.
AI, Age, and Inclusion
Inclusion is also about age. AI systems must be designed for everyone, not just tech-savvy adults.
- For Children: AI recommendation algorithms on video platforms or social media can create addictive loops. There are ethical questions about data collection from minors and the need for age-appropriate content filters.
- For the Elderly: As services like banking move online, complex AI-driven interfaces can exclude older adults who are less digitally native. This can prevent them from accessing essential services.
*(Chart: The Access Gap, showing access to key digital resources; example data.)*
Key Terminology
- **Stakeholder:** Any person or group of people affected by an AI system. This is not just the user, but can include people a decision is made *about*, and even broader society.
- **Trade-off:** An ethical compromise. In the self-driving car case, choosing between one bad outcome and another is a trade-off. There is no "perfect" solution.
- **Proxy:** A feature used as a substitute for another, often sensitive, feature. Using "zip code" as a substitute for "race" is a common example of a harmful proxy in algorithmic bias.
- **Algorithmic Transparency:** The idea that the decisions made by an AI should be understandable and explainable to humans. A "black box" AI is one that is not transparent.
- **Informed Consent:** A clear, knowing, and voluntary agreement to data collection. This is different from just clicking "I Agree" on a long, unreadable document.
Chapter Notes: Key Takeaways
- Ethics Involves Trade-offs: AI ethics is rarely about right vs. wrong. It is usually about right vs. right, or choosing the “least bad” option. These rules are programmed by humans.
- Data is Power (and a Liability): All AI systems are trained on data. The collection of this data creates privacy risks. “Free” apps are paid for with your personal data.
- AI Can Be Biased: Bias can come from skewed data, flawed human logic, or feedback loops from users. This can lead to unfair or discriminatory outcomes.
- Access is Not Equal: The benefits of AI (new jobs, tools) and the harms (job automation, digital exclusion) are not distributed fairly. The “Digital Divide” can affect people based on income, location, and age.
5. Questions & Answers (FAQs)
Q: Is AI ethics just about self-driving cars?
A: No. The self-driving car is just one clear example. AI ethics covers any situation where AI makes choices that affect humans. This includes privacy, fairness in loan applications, bias in search results, and impacts on jobs.
Q: Can we just “fix” AI bias by using more data?
A: Not always. Using more data might help if the problem is just data-driven bias. But if the data itself reflects a biased world, “more data” just means “more bias.” It also does not fix algorithmic bias (flawed human logic) or interaction bias (feedback loops).
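A quick simulation with made-up proportions shows why. If 90% of images in a source depict one group, a larger sample from the same source still depicts 90% one group:

```python
import random

# Made-up proportions: 90% of images in an imaginary training source
# depict one demographic. Sampling more keeps the same skew.

def sample(n: int) -> list[str]:
    return ["group_A" if random.random() < 0.9 else "group_B" for _ in range(n)]

for n in (1_000, 100_000):
    data = sample(n)
    print(f"n={n}: group_A share = {data.count('group_A') / n:.2f}")  # ~0.90 both times
```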
Q: What can I do about data privacy?
A: You can take small steps. Review your phone’s app permissions. Deny permissions that an app does not truly need (e.g., a calculator app does not need your location). Be aware of what you share online. Support companies and laws that protect user privacy.
Q: Will AI take all of our jobs?
A: This is unlikely. AI will automate certain *tasks*, not necessarily entire *jobs*. This will change many jobs and displace some, but it will also create new jobs that require human skills like creativity, critical thinking, and emotional intelligence. The main challenge is ensuring people can get the training for these new roles.




