Welcome! Today you'll get hands-on with AI tools and start building intuition for how they work.
Open ChatGPT, Claude, or Gemini and ask it a genuine question. Try one of these starters, or make up your own:
Notice: How long is the answer? Does it feel confident? Does it hedge? Does it ask you anything back?
Try this prompt to get a short professional bio:
Read the output. Then try the prompt → output → feedback loop:
Try one of these and observe what happens:
Notice how AI handles the limits of its knowledge. Does it refuse? Make something up? Offer a workaround? Understanding these failure modes makes you a smarter user.
Part A: Finish the sentence. Predict what AI will say, then check.
Part B: Break it on purpose. Try to get AI to produce something weird or wrong.
Click the link below to open a live, working web app, built by asking AI to write the code:
https://simple-hello--fundlush1.replit.app/
You didn't write any of that code. An AI did, from a plain-English description. That's where this course is headed.
For each prompt below, compare the vague version to the improved version and study what changed.
| Iteration | Prompt Change | Result Improvement |
|---|---|---|
| 1 | Added "HTML/CSS/JS, no libraries" | Got a working file instead of snippets |
| 2 | Added "grid layout, Win95 aesthetic" | Styled buttons, beveled borders, gray palette |
| 3 | Added "keyboard support, chained ops" | Keydown events added, operator logic fixed |
You don't need to know how to write code to read it. Let's practice.
Before reading the explanation: what do you think this program does? Write down your guess.
The f"..." syntax means "fill in the blanks": the {name} and {color} get replaced with whatever the user typed.
Paste that code into ChatGPT and ask:
Then ask a follow-up:
Test the prediction. AI should say the {name} and {color} would print literally instead of being replaced.
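You can check the prediction yourself without AI. A minimal sketch, assuming the program's variables are called name and color (the sample values stand in for whatever the user typed):

```python
# Stand-ins for whatever the user typed at the input() prompts
name = "Alex"
color = "blue"

print(f"My name is {name} and I like {color}.")  # f-string: the blanks are filled in
# → My name is Alex and I like blue.

print("My name is {name} and I like {color}.")   # no f: the braces print literally
# → My name is {name} and I like {color}.
```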
Ask AI to add a third question:
Copy the output, run it, verify it works. You just modified a program using plain English.
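For reference, the modified program might look something like this. The original program isn't shown here, so the exact prompts and the third question (favorite food) are assumptions, and fixed sample answers stand in for input() so the file runs on its own:

```python
# Sample answers stand in for input(); the real program would ask, e.g.:
# name = input("What's your name? ")
name = "Alex"
color = "blue"
food = "tacos"   # hypothetical third question

print(f"Hi {name}! {color} is a great color, and {food} sounds tasty.")
# → Hi Alex! blue is a great color, and tacos sounds tasty.
```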
Pair up. Each person picks one task and writes two prompts: a vague one and a specific one. Then swap and run each other's prompts. Compare results.
Diana tried: "Write an email to a professor asking for an extension."
The AI gave a generic template. Then she improved it:
Result: A specific, warm, appropriately toned email that Diana said she "would actually send."
Key insight: The more context you give, the more useful the output. AI doesn't know your situation β you have to tell it.
Read each program, figure out what it does, then use AI to modify it.
Read it: What does :.2f do? (Hint: look at the output format.)
Modify it: Ask AI to split the total evenly between 2, 3, or 4 people. The program should ask "How many people?" and show each person's share.
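One plausible shape for the modified program, with assumed prompt wording and fixed sample values standing in for input():

```python
total = 47.85    # float(input("What's the total bill? "))
people = 3       # int(input("How many people? "))

share = total / people

# :.2f formats a number with exactly two digits after the decimal point
print(f"Total: ${total:.2f}")                    # → Total: $47.85
print(f"Each of {people} people pays ${share:.2f}")  # → Each of 3 people pays $15.95
```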
Read it: What formula is being used? Does it match what you learned in school?
Modify it: Ask AI to also convert to Kelvin (K = C + 273.15) and display all three values.
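A sketch of the three-way conversion, with an assumed prompt and a fixed sample value standing in for input():

```python
celsius = 25.0   # float(input("Temperature in Celsius? "))

fahrenheit = celsius * 9 / 5 + 32   # F = C × 9/5 + 32
kelvin = celsius + 273.15           # K = C + 273.15

print(f"{celsius}°C = {fahrenheit}°F = {kelvin} K")
# → 25.0°C = 77.0°F = 298.15 K
```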
Read it: What does random.choice() do? What would happen if the list had only one item?
Modify it: Ask AI to add 5 more compliments to the list and make it ask "Want another one? (yes/no)" after the first compliment, looping until the user says no.
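A sketch of the looping version, assuming the compliments below (the originals aren't shown) and with simulated replies standing in for the yes/no input():

```python
import random

compliments = [
    "You ask great questions.",
    "Your curiosity is contagious.",
    "You explain things clearly.",
]

# random.choice picks one item from a list at random; if the list had only
# one item, it would simply return that same item every time
print(random.choice(compliments))

# Simulated user replies stand in for input("Want another one? (yes/no) ")
replies = ["yes", "yes", "no"]
for reply in replies:
    if reply != "yes":
        break                        # loop until the user says no
    print(random.choice(compliments))
```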
Today you built real programs using AI β one for each of the three core building blocks of code: sequence, selection, and iteration.
| Line | What happens | Value stored |
|---|---|---|
| 1 | Asks for wage, user types 15 | wage = 15 |
| 2 | Asks for hours, user types 40 | hours = 40 |
| 3 | Multiplies 15 × 40 | pay = 600 |
| 4 | Displays the result | "You earned $600.00 this week." |
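Traced as a Python sketch consistent with the table above (the prompt wording is assumed, and the values 15 and 40 stand in for input()):

```python
wage = 15.0    # float(input("What's your hourly wage? "))
hours = 40.0   # float(input("How many hours did you work this week? "))

pay = wage * hours                           # 15 × 40 = 600
print(f"You earned ${pay:.2f} this week.")   # → You earned $600.00 this week.
```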
| Input | Which branch fires? | Output |
|---|---|---|
| 45°F | else if (temp <= 60) | "It's chilly. Grab a jacket." |
| 80°F | else if (temp <= 80), which catches 80 exactly | "Perfect weather!" |
| -10°F | if (temp < 32) | "It's freezing! Stay inside." |
| "hot" | parseFloat("hot") is NaN, so no branch matches cleanly | undefined / blank |
The result is NaN: the program doesn't crash, but the output is wrong. This is a real-world validation gap.

| Question | Answer |
|---|---|
| Where does the loop start? | i = 0 → the first item (Bread) |
| Where does it end? | When i < groceries.length is false → after item 5 |
| How many times does it run? | 5 times (indices 0 through 4) |
Today's theme: knowing when to hand the wheel to AI and when to take it back. Structured prompts, ethical judgment, and the question "should we?" vs. "how do we?"
One was written by a human. Two by AI. Which is which?
| Text | My Guess | Why |
|---|---|---|
| A: "Learning to code gives you the power to create solutions…" | AI | Correct-sounding and motivational, but generic. No specific story, no friction. Reads like an AI generating an inspirational opener. |
| B: "In today's rapidly evolving digital landscape…" | AI | Absolutely AI. "Rapidly evolving digital landscape," "indispensable asset," "competitive advantage": corporate buzzword bingo. No human writes like this voluntarily. |
| C: "I started learning to code because I was tired of having ideas…" | Human | Specific, personal, uncomfortable ("tired of having ideas"). "App that my neighborhood actually uses" is a detail an AI wouldn't invent. The friction is the tell. |
AI Output (paraphrased): "A variable in JavaScript is a named storage location for data. You declare one using var, let, or const. Variables can hold numbers, strings, booleans, objects, or arrays. The value can be changed later unless declared with constβ¦" (continues for several paragraphs)
AI Output (paraphrased): "Think of a variable like a labeled sticky note. You write something on it β a number, your name, anything β and then you can read it or change it whenever you want. In JavaScript, you make one by writing let myName = "Penn"; β the label is myName, and "Penn" is what's written on it."
AI Output (paraphrased):
Variable: a named binding that associates an identifier with a value in memory.
Syntax:
Common mistake: Using var inside a loop and expecting it to be block-scoped. Use let instead.
| Prompt | Tone | Best for | Best code example? |
|---|---|---|---|
| No role | Textbook neutral | Someone who already knows a bit | No (buried in prose) |
| Teacher role | Conversational, analogies | Me, right now | Simple, memorable |
| Technical writer | Structured reference | Looking something up quickly | Yes (three declarations in one block) |
| Constraint | Result |
|---|---|
| Under 50 lines | 32 lines ✓ |
| Navy background + amber headings | ✓ exactly as specified |
| Three sections | ✓ About, What I'm Learning, My Goal |
| No external files | ✓ all in <style> tag |
| Mobile-friendly | ✓ max-width + viewport meta |
| GitHub link | ✓ |
Result: Generic. "Acme Corp" placeholder. Blue and gray. Hero text: "Welcome to Our Business." A services section with lorem ipsum. Nothing usable.
Result: Marcus's name in the about section. The exact tagline. Actual prices. Earthy green (#4a7c59) and warm brown (#8b5e3c) palette. The button says "Book a Walk." It looked like a real business website.
| Detail in prompt | Appeared in output |
|---|---|
| Owner name "Marcus" | ✓ "Hi, I'm Marcus" in About section |
| Exact tagline | ✓ Verbatim in hero h1 |
| Three service tiers with prices | ✓ Cards with $20/$35/$15 |
| "Pet first aid certified" | ✓ Mentioned in About |
| Earthy greens and warm browns | ✓ Used in CSS variables |
| Book a Walk button | ✓ Styled but non-functional, as requested |
AI wrote: An email claiming "extensive experience in full-stack development," "expertise in React, Node.js, PostgreSQL, Python, Java, C++, AWS, Docker, Kubernetes, and machine learning," and calling itself an "invaluable asset."
Decision: Reject / Rewrite
Decision: Modify. Accept the function, then ask for a simpler version to study.
Why: the function works and is efficient (the i += 6 trick skips multiples of 2 and 3, which I didn't know). I can use it, but I'd also ask for a naive version I can fully trace through: "This works. Now show me one that's slower but that a beginner can read line by line." Using code I can't explain is a liability.

AI suggested: TruckTracker, StreetEats, FoodRoamer, BiteBus, RollUp
Decision: Modify. Pick one, verify, then tweak.
What went wrong: AI made things up. It inflated a prompt ("I'm learning to code") into a portfolio that doesn't exist. The student didn't catch it or chose not to. When the interview happened, the lie became visible and the trust was gone.
The line: AI can draft language, clean up what I wrote, and suggest framing. It cannot invent experience I don't have. My rule: if I can't back it up in a 5-minute conversation, it doesn't go in.
Right use: "Here are three bullet points about a project I did. Make them sound more professional." That's AI refining real truth β not generating fictional truth.
What should have happened: Read every line of the generated code before shipping it. If I couldn't explain a function, I should have asked AI to explain it, then asked follow-up questions until I could. "Working" and "understood" are different things. Working is the minimum. Understood is what lets you fix it when it breaks at 2am six months later.
The minimum standard: I can describe what each function does in plain language and identify where I'd look if something breaks. That's the floor for responsible delivery.
When asked to describe a "typical software engineer": defaults to "he"/him pronouns, probably young, probably in a hoodie or office setting, probably white or East Asian across many outputs.
When asked to describe a "typical nurse": female-leaning, scrubs, gentle, maybe "she" by default.
Where it comes from: The internet. Specifically: news articles, job boards, blog posts, LinkedIn profiles, Reddit threads, Wikipedia entries. All of them reflect the world as it was historically documented, not as it should be. AI learned the pattern, not the exception.
Why it matters: If you use AI to write job descriptions, marketing copy, or character descriptions without checking, you're amplifying the same stereotypes that already make some people feel like they don't belong in the field. The fix isn't complicated β read the output and ask "who does this leave out?"
| AI Output | Decision | Why |
|---|---|---|
| Bio that said "aspiring developer with a passion for technology" | REJECTED → "Student in an AI builder program, learning to build with AI tools" | "Aspiring developer" is vague and performative. The replacement says what I'm actually doing right now. |
| Navy (#1a1a2e) background + amber (#f59e0b) headings as specified in the constraint prompt | MODIFIED → deep purple + crimson | The constraint prompt specified the course's default colors, not mine. I kept the structure and made it mine. |
| Skills section listing React, Node.js, PostgreSQL, Python | REJECTED → changed to "Currently Learning" section | I haven't learned those yet. A "Skills" section implying proficiency would be dishonest. "Currently Learning" is accurate and still shows direction. |
| Clean HTML structure (DOCTYPE, viewport meta, semantic section tags) | ACCEPTED | I can explain every tag. The structure is correct and I understand it. No reason to change what works. |
| About section that mentioned "I love solving complex problems" | REJECTED → removed | Generic filler. If I can't give an example of a complex problem I solved, I shouldn't claim I love solving them. |
| Footer with copyright © 2025 | MODIFIED → changed to 2026 | AI's training data had 2025 as the present year. It's 2026. Small thing, but accuracy matters. |
"AI, the kind we've been using this week, is basically a very sophisticated autocomplete. Imagine your phone's keyboard suggestion, but instead of suggesting the next word in a text message, it's been trained on most of the internet and can suggest the next word in an essay, a piece of code, or a whole conversation.
Here's the analogy I like: it's like a really well-read intern. They've read everything in the library: every book, article, and forum post. They're fast, never tired, and can draft a first version of almost anything. But they've never actually lived life. They don't know what's true right now, they can confidently say wrong things, and they'll tell you what you want to hear if you push them.
What AI is genuinely good at: first drafts, code boilerplate, summarizing long documents, brainstorming options, explaining things 10 different ways until one clicks.
What AI is bad at: knowing what's true right now, understanding your specific situation, making ethical calls, and knowing when to say 'I don't know.'"
| Did they understand it? | Jargon they caught | Strongest part |
|---|---|---|
| Yes | "Boilerplate": defined it on the fly as "standard starter code, like a template" | The "well-read intern" analogy: partner said it was the first time AI felt intuitive to them |