A fluff-free, 10-topic guide to using AI effectively and ethically - for builders, students, and lifelong learners ready to move from curiosity to capability.
By Ying Zhou, Executive Director, Tech Incubator at Queens College (CUNY), co-created with AI
AI tools are powerful, but using them blindly can lead to misinformation, biased results, and serious privacy risks. This guide changes that.
Each topic takes about 10–15 minutes. Together they'll help you understand how AI works, prompt more effectively, spot biased or misleading outputs, protect your privacy, and build real confidence as an informed AI user.
How to use this page: Read the topics in order, or jump to whatever you need. This page is designed as a long-term reference - bookmark it and come back whenever you want to sharpen a skill.
Today's goal: Cut through the hype and understand what AI actually is, and what it isn't.
So, What Is AI Really?
In simple terms:
Artificial Intelligence = computer systems that make decisions or predictions based on data.
Not magic. Not sentient robots. Just tools that learn patterns from a lot of data.
The AI you're interacting with today - ChatGPT and similar assistants - is narrow AI: very good at specific tasks like writing, translating, or summarizing.
What AI Is Not
It doesn't "think."
It doesn't "understand" context the way humans do.
It doesn't have feelings, intent, or common sense.
If it gives you a great answer, it's pattern-matching. If it gives you a bad answer? Same thing. It looks confident, but confidence does not equal correctness.
Where You're Already Using AI
Email spam filters.
Netflix or YouTube recommendations.
Google search ranking.
Face unlock on your phone.
Voice assistants (Alexa, Siri, and friends).
Your Action Step
Observe AI in your life.
Make a quick list of 3 places where AI is quietly working behind the scenes today. You'll be surprised how many you use daily.
Quick Recap
AI is pattern-matching, not magic.
Narrow AI ≠ human-level intelligence.
You're already using it, knowingly or not.
Topic 02
Under the Hood - How AI Predicts What Comes Next
In the last topic you learned what AI is. Now let's go one level deeper into how generative AI (like ChatGPT) actually works - mechanically, not just conceptually.
What Is Generative AI?
Generative AI is a type of artificial intelligence designed to create new content - text, images, code, even audio or video - based on patterns it learned from massive datasets.
ChatGPT is primarily a language model, built to generate human-like text. Connected with tools like DALL·E, it can also generate images. Some platforms support code-based animations or video workflows through plugins and APIs.
Still, at the core of it all is text-based prediction.
How It Actually Works (In Plain English)
Generative AI uses probability to predict what word (or token) should come next in a sentence. Imagine typing:
The capital of France is…
The model doesn't "know" the answer. It calculates what's most likely to come next based on all the data it has seen.
"Paris" = 87% likely
"Berlin" = 5% likely
"Croissant" = 0.2% likely, and so on.
It then picks a likely next token - usually the top choice, like "Paris," though most models add a bit of randomness rather than always taking the single most probable word - and repeats the process hundreds of times, one token at a time, to build a complete response.
It's pattern recognition, not comprehension.
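The prediction loop can be sketched in a few lines of Python. Everything here is invented for illustration - a real model scores tens of thousands of tokens at every step, and the probabilities come from the model's training, not a lookup table.

```python
# Toy sketch of next-token prediction. The vocabularies and probabilities
# below are made up for illustration; a real language model computes a
# probability for every token in its vocabulary at each step.

def next_token(context):
    # A real model derives these probabilities from the context.
    candidates = {
        "The capital of France is": {"Paris": 0.87, "Berlin": 0.05, "Croissant": 0.002},
        "The capital of France is Paris": {".": 0.9, ",": 0.05},
    }
    probs = candidates.get(context, {".": 1.0})
    # Greedy decoding: pick the single most likely token.
    return max(probs, key=probs.get)

def generate(prompt, steps=2):
    text = prompt
    for _ in range(steps):
        token = next_token(text)
        # Attach punctuation directly; separate words with a space.
        text = text + (token if token in ".," else " " + token)
    return text

print(generate("The capital of France is"))  # → The capital of France is Paris.
```

Notice that nothing in this loop checks whether "Paris" is true - it only checks what is probable. That gap is exactly why verification (Topic 05) matters.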
Real-World Analogy
Think of it like a supercharged autocomplete. It doesn't understand meaning the way humans do. It's just seen so much data that it can predict what sounds right with impressive fluency.
Why This Matters
It doesn't "know" what's true, only what's likely.
It can sound confident and still be completely wrong.
It may leave out sources unless you ask for them directly.
That's why fact-checking and smart prompting are essential.
Your Action Step
Test it yourself.
Ask ChatGPT (or any AI you use): "Who won the Nobel Peace Prize in 2023?" Then follow up with: "Can you cite your sources?" and "How do you know this?"
This quick test shows you how AI responds when pushed for accuracy and sourcing.
Quick Recap
Generative AI creates content by predicting what comes next.
ChatGPT is a language model with text-based DNA.
With the right tools, it can also generate images or code-based visuals.
Accuracy isn't guaranteed - it's predicting, not reasoning.
Topic 03
Why AI Is Smarter With Better Prompts
Now that you understand how AI works, it's time to start steering it intentionally. Today you'll learn the basics of effective prompting - so you can get reliable, relevant, and useful answers from tools like ChatGPT.
Think of AI as a Very Smart Intern
Fast, capable, surprisingly articulate, but only if you give it detailed, specific instructions. Otherwise?
It doesn't ask clarifying questions unless you tell it to.
It doesn't know your goals unless you provide context and intention.
It may give you incorrect answers with confidence (this is called a hallucination), unless you challenge it or ask for sources.
3 Prompting Basics That Make a Big Difference
1. Be Specific
Instead of:
Explain business risk.
Try:
Act as a business professor. Explain the top 3 types of business risk (financial, operational, strategic) in 3 short paragraphs, written at a graduate level. Include one real-world example.
More detail = more useful answers.
2. Give It a Role
This changes how the AI "thinks." Instead of:
Write a business plan.
Try:
Act as a startup advisor. Write a simple 1-page business plan for a coffee shop in a college town.
Roles = better tone, structure, and assumptions.
3. Define the Format
Want a checklist? Summary? Table? Email? Say so directly.
List 5 pros and cons in bullet points.
Respond in table format with 3 columns: Risk Type | Description | Example.
The more you shape the output, the better it performs.
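The three basics - role, specific task, defined format - can be treated like slots in a template. Here's a minimal sketch of that idea; the field names and layout are just one illustrative convention, not a standard.

```python
# Minimal prompt-template sketch combining the three basics:
# a role, a specific task, optional context, and a defined format.
# The structure is an illustrative convention, not an official format.

def build_prompt(role, task, context="", output_format=""):
    parts = [f"Act as {role}.", task]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Format: {output_format}")
    return " ".join(parts)

prompt = build_prompt(
    role="a startup advisor",
    task="Write a simple 1-page business plan for a coffee shop.",
    context="The shop is in a college town.",
    output_format="Use headed sections with bullet points.",
)
print(prompt)
```

Filling in each slot forces you to be specific, which is the whole point: a prompt with an empty role, context, or format slot is usually a prompt that could be sharper.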
Your Action Step
Rewrite a prompt, and see the difference.
Pick a general topic you care about. Write a basic prompt, then rewrite it using a specific role, clear context, and defined format. Compare the outputs side-by-side.
Quick Recap
Specificity beats vagueness every time.
Roles shape tone, structure, and assumptions.
Format instructions give you control over output.
Topic 04
3 Advanced Prompting Techniques That Unlock Better AI Output
By now, you've seen how specific, role-based, and structured prompts make a big difference. Today, we go further: advanced prompting techniques that help you get clearer, more accurate, and more consistent results.
3 Power-User Prompting Techniques
1. Iterate and Refine
The first response is rarely the best. AI improves when you treat it like a collaborator, not a vending machine.
Rewrite this more concisely.
Now add 3 examples.
Can you make this more persuasive?
Cite your sources.
What would a critic say about this?
Small nudges = big improvements.
2. Use Chain-of-Thought Reasoning
AI performs better when you ask it to reason step by step. Instead of:
Should I lease or buy a car?
Try:
List the pros and cons of leasing vs. buying a car, based on financial cost, maintenance, flexibility, and resale value. Then summarize with a recommendation based on each scenario.
More thought = more trustable answers.
3. Break Big Tasks Into Smaller Ones
Instead of one mega-prompt, build the response in steps:
Prompt 1: Act as a hiring manager. What skills should I look for in a data analyst?
Prompt 2: Now write 3 interview questions to test for each of those skills.
Prompt 3: Format this as a 1-page hiring guide.
This gives you far more control, and far better results.
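The three-prompt pattern above is really a pipeline: each answer becomes context for the next prompt. A sketch of that flow, where `ask` is a hypothetical stand-in for whatever chat tool or API you use (here it just echoes its input so the control flow is visible):

```python
# Sketch of breaking one big task into chained prompts.
# `ask` is a hypothetical placeholder for a real AI call; this stub
# echoes the start of each prompt so you can trace the chain.

def ask(prompt):
    # A real implementation would send the prompt to an AI service.
    return f"[answer to: {prompt[:40]}...]"

def chained_task():
    skills = ask("Act as a hiring manager. What skills should I look "
                 "for in a data analyst?")
    questions = ask("Based on these skills, write 3 interview questions "
                    f"for each one:\n{skills}")
    guide = ask(f"Format this as a 1-page hiring guide:\n{questions}")
    return guide

print(chained_task())
```

The design choice here is that each step has one job, so you can inspect and correct the output before it feeds the next prompt, instead of untangling a single sprawling answer.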
Bonus: Ask AI How to Prompt It
How can I prompt you more effectively for [insert your topic]?
You'll be surprised how meta - and how helpful - it gets.
Your Action Step
Guide the AI. Don't accept the first guess.
Pick something you want help with. Start with a structured prompt, follow up with at least 2 refinements, and optionally use chain-of-thought or break it into steps. That's how pro users get real value.
Quick Recap
Iteration turns okay answers into great ones.
Chain-of-thought unlocks reasoning.
Breaking down big asks = sharper results.
Topic 05
How to Verify What AI Tells You
Now that you know how to write solid prompts, let's tackle something just as important:
How do you know if what the AI tells you is actually true?
Reminder: AI Doesn't "Know" - It Predicts
AI doesn't retrieve information from a database of facts. It predicts the next word based on patterns it learned from massive amounts of data. That means:
It can be outdated.
It can misrepresent things.
It can sound 100% confident while being 100% wrong.
This is called a hallucination - when AI makes something up but presents it as fact.
3 Ways to Verify AI Output
1. Ask for Sources (Directly)
AI models won't cite sources by default, but you can ask.
Can you cite reputable sources for that?
Where did this information come from?
Link to a study or article supporting that claim.
Even then, don't assume accuracy. Always double-check the source links.
2. Cross-Check With External Tools
Google or Wikipedia for quick fact-checks.
Google Scholar or PubMed for academic topics.
Reputable news outlets or official government sources (.gov domains) for policy or current events.
If AI gives you a stat, Google that stat. If it gives you a law, check the legal source.
3. Spot "Too Good to Be True" Responses
Fake quotes or made-up citations.
Details about very recent events (the model's knowledge has a training cutoff and may be outdated).
Overly polished summaries of complex or controversial topics.
When in doubt: slow down, dig deeper, and verify.
Bonus: Use AI to Help You Fact-Check
Summarize this peer-reviewed article: [paste text or link].
What are arguments against this claim?
Where might this information be biased?
Your Action Step
Verify a recent AI response.
Take a recent AI response and put it through the verification steps: ask for the source, look it up, and see if anything was off or oversimplified. This is a critical habit of responsible AI use.
Quick Recap
AI predicts - so verify.
Ask for sources; don't assume accuracy.
Cross-check with trusted external tools.
Topic 06
What AI Doesn't Tell You (Unless You Ask)
Today we go one step deeper - into something many people don't think about until it's too late: bias.
AI isn't neutral. Unless you learn to detect bias in its outputs, you could unintentionally reinforce it, or make decisions based on distorted information.
Why AI Can Be Biased
AI models are trained on human-generated data: books, websites, articles, forums, and more. And human data is full of bias - cultural, political, racial, gendered, economic. Even if the model isn't trying to be biased, it learns from biased patterns and may repeat them.
Common Biases in AI Responses
Cultural bias: over-representing Western perspectives.
Gender bias: reinforcing stereotypes (e.g., "nurses are women").
Racial bias: associating certain ethnicities with negative traits.
Confirmation bias: echoing dominant opinions without presenting alternatives.
Language bias: misunderstanding non-standard dialects or multilingual input.
3 Ways to Spot (and Handle) Bias in AI
1. Ask: "What perspective is this based on?"
Is this response influenced by Western cultural norms?
Would this answer be different in another region or culture?
2. Ask for Counterarguments or Alternative Views
What are the opposing views on this topic?
How might someone from a different background respond?
3. Use Context-Specific Prompting
How would this topic be explained differently in Western vs. Eastern cultures?
Compare this practice in the U.S., Latin America, and East Asia.
Use neutral, respectful language across race, gender, and age.
Explain this policy with potential ethical concerns included.
Your Action Step
Re-read through a bias lens.
Pick any AI response and re-read it through a bias lens. Ask: What's missing? What's assumed? Who might disagree? Then re-prompt to explore another angle. This habit makes you a critically aware AI user.
Quick Recap
AI learns from human data, and inherits human bias.
Ask about perspective, assumptions, and counterviews.
Context-specific prompts = better inclusivity.
Topic 07
Just Because AI Can, Doesn't Mean You Should
So far, you've learned how to prompt well, verify responses, and detect bias. Now it's time to explore something more foundational: ethical use. AI isn't just a tool, it's a tool with real-world consequences.
Ethical Risks of Using AI Irresponsibly
Here are common ways ethical boundaries can be crossed, even unintentionally:
Spreading misinformation by not verifying output.
Plagiarizing by copying AI-generated text word-for-word.
Manipulating or deceiving others (e.g., fake reviews, impersonation).
Exposing private or sensitive information by entering it into an AI tool.
Automating harm, using AI for spam, harassment, or exploitative content.
3 Principles for Ethical AI Use
1. Be Transparent
If you're using AI in work or communication, say so. Trying to pass off AI-generated content as your own can erode trust.
This summary was assisted by AI and reviewed by a human.
I used ChatGPT to help brainstorm these ideas.
Transparency builds credibility.
2. Protect Privacy
Don't enter:
Personal details (yours or others').
Confidential client info.
Private health or financial records.
Even if the tool says it won't "remember," always treat AI tools as public-facing by default.
3. Use for Empowerment, Not Exploitation
Use AI to save time, enhance creativity, increase accessibility, and make complex topics easier to understand. Two gut-check questions before any use:
Would I be comfortable with this use being made public?
Is this helping someone, or just helping me get away with something?
Your Action Step
Evaluate a recent use of AI.
Ask yourself: Was I transparent? Did I protect private information? Did this use align with fairness and respect? If you feel uneasy — good. That's your ethical compass doing its job.
Quick Recap
Transparency builds trust.
Privacy protection isn't optional.
Use AI to empower, not to exploit.
Topic 08
Are You Oversharing With AI? Here's How to Stay Safe
Generative AI tools like ChatGPT are incredibly useful, but they're not your diary. Understanding what AI can see (and what it retains) helps you stay safe and responsible.
First, Let's Clear Up a Common Myth
"It's fine. I'm just chatting with a robot." → Reality: You may be feeding that robot sensitive data.
Policies vary by platform, but on many consumer AI tools conversations are stored and may be used to improve the model unless you opt out or use private modes. So if you're entering:
Names, emails, phone numbers.
Financial or health data.
Client information.
Company secrets or credentials.
…you may be exposing more than you realize.
3 Rules for AI Privacy Protection
1. Treat Every AI Chat Like It's Public
If I wouldn't post this on a public forum, I shouldn't paste it into an AI tool.
Even in secure tools, assume a risk of exposure, accidental or otherwise.
2. Don't Share Confidential Info
Never enter passwords, API keys, sensitive business data, or internal emails. If you must reference sensitive topics (e.g., summarizing a report), anonymize the content or ask the AI to help you format text you'll fill in later.
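One practical way to anonymize text before pasting it into an AI tool is to swap obvious identifiers for placeholders. A sketch using regular expressions - the patterns below are illustrative, catch only common formats, and are no substitute for a careful manual review:

```python
import re

# Illustrative redaction patterns. They catch only common formats
# (plain emails, US-style phone numbers, SSNs) and will miss plenty
# of real-world cases - always review the result by hand.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    # Replace each matched identifier with its placeholder label.
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 212-555-0142."))
# → Reach Jane at [EMAIL] or [PHONE].
```

The placeholders also make it easy to ask the AI to produce a template ("write an email to [EMAIL] about...") that you fill in later, outside the tool.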
3. Use Secure AI Tools (or Private Modes)
Some tools offer stronger protections: enterprise versions of ChatGPT typically exclude your data from training, and self-hosted LLMs can keep processing entirely local. Also: check whether the tool has a setting to opt out of training data sharing, and turn it on.
Your Action Step
Audit last week's AI use.
Review how you've used AI in the past week. Did you enter anything sensitive? Could a conversation be reconstructed to reveal personal or client data? If yes, tighten your habits.
Quick Recap
Treat every AI chat as public by default.
Never enter credentials, secrets, or private records.
Use private modes and opt out of training data sharing.
Topic 09
Real-World Practice - Prompt, Verify, Check, Protect
You've built a toolkit of responsible AI habits: clear prompting, verification, bias awareness, ethical use, and privacy protection. Today, put it all into action.
The Task: Simulate a Real AI Session
Let's say you want to use AI to write a short explainer on a trending topic, for example:
How AI could impact the job market in the next 5 years.
You'll go through it step by step.
Step 1: Craft a Strong Prompt
Act as a labor market economist. Write a 300-word summary on how AI may impact white-collar and blue-collar jobs over the next 5 years. Use bullet points and include one counterargument.
Step 2: Verify the Output
What sources support this?
When was this data last updated?
Then manually cross-check any claims with trusted sources like news outlets, journals, or economic forecasts.
Step 3: Check for Bias
Is this analysis U.S.-centric?
What perspectives are missing?
How might this look different in developing countries?
Revise the prompt if needed to include global or inclusive viewpoints.
Step 4: Protect Privacy
Don't input your company's internal hiring data, specific salaries, employee info, or strategy docs. If you're referencing confidential material, generalize it or insert placeholders.
Step 5: Reflect
Was the response accurate and balanced?
Did the AI mislead or make something up?
Would you be comfortable sharing this publicly, ethically, and securely?
This reflection is the key to building real AI literacy.
Your Action Step
Run it on your own topic.
Pick a different topic, one from your own work or interests, and run the same 5-step process. The more you practice, the more second-nature it becomes.
Quick Recap
Prompt with intention.
Verify with rigor.
Check bias with curiosity.
Protect privacy with discipline.
Reflect to turn practice into mastery.
Topic 10
Moving Forward - Your AI Journey Doesn't End Here
You now have the foundations to use AI effectively and responsibly. That puts you ahead of most people already using these tools. But AI is evolving fast. The next step is learning how to grow with it, without falling behind, getting careless, or losing trust in your tools.
What You've Learned So Far
What AI is (and isn't).
How it actually works (next-word prediction).
Prompting fundamentals and advanced techniques.
How to verify and challenge AI outputs.
How to spot bias and ask better questions.
Ethical use and privacy protection.
Real-world decision-making.
That's a huge win, and you're just getting started.
How to Keep Your AI Skills Sharp
1. Keep Practicing (Deliberately)
Use AI regularly, but don't just "ask stuff." Run structured experiments:
Change one part of a prompt and observe the result.
Try solving real tasks in your work or life.
Keep a Prompt Journal with what works and what doesn't.
Repetition with reflection = mastery.
2. Stay Current (But Stay Skeptical)
Follow AI newsletters, ethics-focused blogs, and developer notes from OpenAI, Anthropic, and other labs. But don't just chase hype, stay grounded in what's useful and safe.
3. Lead by Example
Share what you've learned with your team, classmates, or community.
Encourage transparency and data protection.
Help others avoid misinformation and misuse.
You don't need to be a developer to be a responsible AI leader.
Your Final Action Step
Find your North Star.
Think back to why you started this guide. What's one meaningful way you'll use AI more effectively and ethically from now on? Write it down. You're no longer just "using AI." You're using it with intention. And that's rare.
Thanks for taking the journey. Now go build something worth building.
About Ying Zhou
Ying Zhou is the Executive Director of the Tech Incubator at Queens College (CUNY), where she leads an ecosystem that helps students, faculty, and community members transform ideas into ventures. She works at the intersection of education, entrepreneurship, and technology — connecting people, sparking collaboration, and co-creating solutions that help others live better, healthier lives.
Her mission is simple: accelerate learning, growth, and contribution to make the world a better place.