AI in society. The parts that affect you as an everyday user — not as an engineer. What has changed in the last eighteen months, how to recognize it, and what to do.
From chatbot to copilot to agent — what changed and what it means for you.
Deepfakes, voice clones, and how to stay safe when anything can be faked.
What gets automated, what gets amplified, what stays human.
EU AI Act, GDPR, copyright — and the rights they give you.
For a decade, software waited for your click. Then it started drafting. Now it acts. This is the biggest mental-model change of 2025.
| Software | Copilot | Agent |
|---|---|---|
| Excel, Salesforce, SAP. You learn the menus. Software waits for instructions. | Embedded assistants in email, docs, CRM. They suggest. You approve every output. | Goal-driven programs that plan, open apps, call systems, and complete tasks end-to-end. |
This module is about the step from the middle column to the right-hand one.
From "per seat" (€20–30 / user / month) to per outcome — per resolved ticket, per processed invoice, per minute of work.
An agent running on your behalf can reach everything you can reach — including every over-shared folder and "anyone with the link" document.
If the agent misfiles, misrepresents, or misorders — you own it. The human is in the loop whether present or not.
Before you let an agent loose: narrow scope, draft-only mode, audit permissions, define a stop button.
A phone, three seconds of audio, and a free app. That's all it takes. The technology is neutral — the implications aren't.
Face or body swap, lip-sync alignment, full-body motion. Consumer-grade hardware.
Trained on as little as three seconds of clean audio. Usable for calls, voicemails, meetings.
Photorealistic, no camera required. DALL·E, Midjourney, Stable Diffusion, Flux.
Fake reviews, fake news sites, AI-written spam at industrial volume.
Humans identify deepfakes correctly around 60% of the time — barely above chance, and worse on high-quality samples. Every detector trains on yesterday's fakes; tomorrow's model is already out. And the real danger isn't only fakes being believed — it's real footage being dismissed because "anyone can fake a video now."
Urgent voice or video asking for money, credentials, approval? Hang up. Call back on a known number.
Agree on a phrase with family and finance. The real person knows it. A clone won't.
Especially when the content is perfectly aligned with what you already want to believe.
Content Credentials (C2PA), SynthID, platform labels. Absence isn't proof — presence is signal.
Three seconds of clean audio is enough. Public voice posts are training data for someone.
Almost no jobs are entirely automatable. Most jobs are a bundle of 10–30 tasks — and AI hits some of them hard, leaves others alone, and amplifies a third group.
Your response: learn to direct AI (prompting, tool selection, verification). Invest in what amplifies — your judgment, taste, relationships. Audit which of your tasks are becoming "draftable" and redirect time to higher-value work. From "What do I do?" to "What do I decide and review?"
Slowly, unevenly, and with real teeth in some places. Here's what you need to know as a user — not a compliance officer.
| Phase | Starts | What it covers |
|---|---|---|
| Prohibited uses | Feb 2025 | Social scoring, manipulative AI, untargeted facial scraping, real-time biometric ID — banned. |
| General-purpose AI | Aug 2025 | Foundation-model providers — transparency, safety testing, copyright disclosures. |
| High-risk systems | Aug 2026 | Hiring, credit, education, law enforcement — audits, conformance, human oversight. |
| Full application | Aug 2027 | Remaining categories, embedded AI in regulated products. |
Your practical rights: to know you're talking to an AI · to know content is AI-generated · to human review of high-stakes decisions · to an explanation of decisions affecting you · to lodge a complaint with national AI authorities.
AI is not something happening to you. It's something you participate in — as a worker, a user, a voter, a consumer. Participate deliberately.