Everyone is talking about storytelling (and desperately looking for it), and while “storytelling” is a terrible buzzword, I decided to attempt a little case study and turn something boring into something interesting via story.
Our Use Case #1 is a generic office stapler. In the future I might create stories for other industries as well.
PS: Of course I used ChatGPT to write the below story, as the point of this article is to illustrate what “storytelling” does.
The Quiet Cohesion
Every morning, when the coffee machine hums and keyboards chatter to life, a small hero waits at the edge of the desk:
The stapler.
No one cheers when it arrives. No one tweets about it. But take a moment and watch its rhythm — plink-plink — like a heartbeat syncing scattered pages into unity.
The stapler doesn’t rewrite documents. It doesn’t design slides or craft strategy. What it does is deeper: it creates cohesion from chaos. A report without it is a forest of loose leaves. With it: a story with structure.
In every stapled stack there’s intention. A contract is ready to be signed. A pitch is ready to be shared. A brainstorm is now a blueprint.
Some might see a tool. Others see a token of order in a world that often threatens to fragment into noise. The stapler doesn’t just bind pages — it binds purpose to people.
And at the end of the day, when the office quiets, it rests — not unnoticed, but quietly proud. Because what it does? It’s the first ritual of every idea that’s taken seriously.
The Takeaway
This small storytelling case study shows what business storytelling and corporate storytelling actually do: it’s not about “embellishing” reality, but about creating meaning.
If a stapler can carry a narrative, then so can payroll software, waste management, or water treatment plants. This is the core of brand storytelling and narrative strategy (turning functional, overlooked products into stories people remember).
And especially in the era of AI-generated content, what still differentiates companies is human storytelling: context, judgment, and the ability to make boring businesses feel coherent, intentional, and real.
How will AI impact the job market? I’ve been asking myself this question countless times, and so far *this* podcast episode seems to give the best answers. Let’s break down this conversation by OpenAI’s COO and its Chief Economist (yes, OpenAI has a Chief Economist!).
In episode 3 (from July 15, 2025) of their podcast, OpenAI COO Brad Lightcap and Chief Economist Ronnie Chatterji discuss the impacts of AI on software, science, small business, education, and jobs.
After listening to that episode, I felt there were so many individual “gems” in it that I just had to extract the most important ones.
Each of the following statements deserves to be meditated over.
🧠 AI as a Tool for Empowerment
“AI is a tool that lets people do things that they had no ability to do otherwise.” — Lightcap
“You have the world’s smartest brain at your fingertips to solve hard problems.” — Chatterji
“It’s the return of the idea guy.” — Lightcap
“AI is interesting because it really is kind of a reflection of your will.” — Lightcap
💼 Jobs, Productivity & Individual Empowerment
“If you wake up one day and decide you want to start a business, that just got meaningfully easier.” — Lightcap
“Software engineers are becoming not 10% more productive, but maybe 10x more productive.” — Lightcap
“What could they build if you can write that much more code and that much better code?” — Chatterji
🧪 Scientific Research & Discovery
“If we can accelerate science, accelerate discovery, we’re gonna have more economic growth and more good things for everybody.” — Chatterji
“Imagine a corridor with doors on either side — AI lets scientists peek behind all the doors.” — Chatterji
“You’re enabling the people who work with and around the scientists to accelerate the end product.” — Lightcap
🏢 Small Businesses & Emerging Markets
“Small teams can do a lot more. We’re seeing companies where non-technical people are building agents.” — Lightcap
“Small business owners can leverage agents for evidence-based advice — that’s something I’m very interested in.” — Chatterji
“In Africa, one of the biggest ROIs is agricultural extension support — AI can scale that.” — Chatterji
🧑🏫 Education Revolution
“The entire way we think about education will have to adapt.” — Lightcap
“What are you teaching in Kindergarten? How to be a human — that’s now the most important skill.” — Chatterji
Generative AI makes almost anything possible. You just need to know the tools and be clear about what you want to say.*
[*In addition, you’ll of course need motivation, agency, and taste, but let’s save that discussion for next time.]
For better or worse, art/culture is about to change, and I’m changing with it.
For context: among other things, I’m also a “real artist”, i.e. an artist who has been creating without AI for quite some time. Luckily, I’m very pro-tech (I grew up with computers thanks to my dad) and have no unresolved conflicts pertaining to creativity, which is why my conversion from “real art” to AI content has been pretty smooth.
I’m mentioning this because a huge chunk of “real artists” continues to be outraged about AI. But that’s something for a future blog.
In the past few months I’ve been using AI to create a lot of stuff (see “Projects”). Since I’m very active on Twitter/X (by the way, let’s connect), I tend to learn about new tools very early, and in many cases I head straight to the tool to try it out.
General Observations
Experimenting, or as some call it “tinkering”, is essential. We are entering a new terrain of artistic expression, with a lot (most?) of it still unexplored.
Imagine what it must have felt like when photography was invented. As cameras became more accessible and photography moved from early adoption into the mainstream, people began to test its boundaries.
They didn’t just replicate paintings. They played with light, blur, composition, even accidents. That’s how entirely new aesthetics emerged.
The same applies to AI tools today. The real breakthroughs don’t come from following tutorials step by step, but rather from misusing tools, playing with prompts, layering outputs, remixing styles, and exploring the “wrong” ways of doing things.
Tinkering isn’t aimlessness; it’s a tool of discovery.
It’s how genres are born, how formats mutate, and how we can stretch the limits of what feels possible.
What I’ve Learned
Things are changing fast, but for the foreseeable future these insights will probably hold true.
Assuming you want to enter the space and try creating something, here are a few pointers:
Many platforms offer free daily credits, and you should make use of those.
Have a structured collection of AI tools. In my case it’s a bookmarks folder.
Definitely also structure your computer folders. You might end up with a lot of images and videos. Don’t get lost in the jungle.
Spend time away from your computer and write down your ideas on paper. In my case, being in nature really helps me not get lost in the details.
Recognize the difference between “gimmick” and “substance”. I see people post AI videos on X, and most of them are gimmicks (= they don’t have any message, and/or are just tropes). You don’t have to aspire to create anything of substance, but some discernment really helps either way.
Spend some time thinking about taste and what it implies.
Have ChatGPT write your image / video prompt. Your job is to tell it your idea, intent, message. This middle step adds a big layer of detail.
Get familiar with JSON. Prompts in JSON format are the ideal language when talking to AI.
What a prompt in JSON format looks like. This one is for a video.
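Since the original screenshot isn’t reproduced here, here’s a minimal illustrative sketch of what such a video prompt might look like. The field names below are my own invention, not a standard schema; every video tool expects slightly different keys, so treat this as a template to adapt:

```json
{
  "scene": "A lone lighthouse keeper watches a storm roll in over the sea",
  "style": "cinematic, 35mm film, muted color palette",
  "camera": {
    "shot": "slow dolly-in from wide to medium",
    "angle": "low angle, slightly tilted"
  },
  "lighting": "overcast dusk, warm lamp glow from the lighthouse window",
  "motion": "waves crashing, coat flapping in the wind",
  "duration_seconds": 8,
  "aspect_ratio": "16:9",
  "mood": "melancholic but hopeful"
}
```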
I will surely dedicate another post to the insights I’ve had while using all the different AI tools. For now let’s keep it concise.
Finally, below is a list of some of the AI tools I’m using. Give them a try; they all have a free tier.
AI Tools I Use
Voice & Script Generation: I use Google’s NotebookLM, a free tool that transforms PDFs, websites, or text into audio summaries. This makes scripting and voice generation easy, even for complex topics. For cases when I need a custom voice (and for all things text-to-speech), ElevenLabs is my go-to.
Music Creation: Tools like Suno and Udio enable me to generate background music for any mood, or entire songs if I happen to have a song idea. You can use lyrics or create instrumental tracks.
Video Generation: Platforms such as Runway, Luma, and Kling AI allow video creation and animation, offering a variety of capabilities depending on the project. These days I also increasingly use Dreamina by ByteDance/CapCut.
Image Generation: I prefer Ideogram, Flux, Reve, and Leonardo.ai for diverse artistic styles. ChatGPT image generation is great for Ghibli-style illustrations or comics. Unlike Midjourney, which tends to create polished, predictable images, these tools yield more unexpected and raw visuals, perfect for experimental art.
That’s it for now! If you want to *listen* to me talk about my creative process, here’s a video I made some time ago:
I don’t know about you, but I’m definitely not into coding all that much. I know just enough Python to write a short script, like one that generates a poem based on a lexicon.
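To give a sense of scale, a script like that fits in a couple of dozen lines. Here’s a minimal sketch of that kind of poem generator (a toy example written for illustration, not the actual script):

```python
import random

# A tiny lexicon grouped by part of speech; a real one would be much larger.
LEXICON = {
    "adjectives": ["quiet", "golden", "restless", "hollow"],
    "nouns": ["river", "stapler", "morning", "signal"],
    "verbs": ["hums", "waits", "unravels", "returns"],
}

def random_line() -> str:
    """Build one line of the poem from randomly chosen lexicon entries."""
    adjective = random.choice(LEXICON["adjectives"])
    noun = random.choice(LEXICON["nouns"])
    verb = random.choice(LEXICON["verbs"])
    return f"the {adjective} {noun} {verb}"

def generate_poem(lines: int = 4) -> str:
    """Stitch a few random lines together into a short free-verse poem."""
    return "\n".join(random_line() for _ in range(lines))

if __name__ == "__main__":
    print(generate_poem())
```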
As a humanities/linguistics person, I’ve never really had patience for extended periods of programming. After about five hours of dealing with code, my focus fades, and I yearn for some fresh air in the park.
I’ve worked with Java, C++, Perl, Prolog, HTML, XML etc. in the past, but none ever went beyond school/university assignments.
While I get bored of coding pretty fast, I still have ideas, side projects, and weird little experiments I’d love to build.
Vibe coding makes me feel like I was born at the perfect time, as it gives me the ability to thrive on ideas without burning out from endless manual coding.
What Is Vibe Coding?
Vibe coding is a fairly new term whose birth we can confidently pinpoint to this tweet by Andrej Karpathy:
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper…
Ever since Karpathy coined this phrase, it’s been popping up all over my social media feeds (primarily Twitter/X), causing slight FOMO every time I see someone build something.
It was also Karpathy who stated that “the hottest new programming language is English.”
That’s basically the essence of vibe coding: You tell the machine what you want, it gets to work, and you go microwave last night’s leftovers.
I think we’re experiencing a substantial paradigm shift. Programming isn’t what it used to be (writing boilerplate, googling syntax, manually debugging loops, etc.).
Coding tasks that are becoming obsolete:
Manually debugging simple errors
Setting up file structures from scratch
Looking up library documentation constantly
Copy-pasting Stack Overflow answers
Writing test cases by hand
Refactoring variable names manually
Commenting obvious code
Rewriting functions for different languages
Configuring basic build pipelines
Writing getters and setters
Typing repetitive API calls
Remembering regex syntax
Manually linting code
Writing basic UI layout code
Wrangling import statements
Searching for best practices
Translating pseudocode into real code
Writing “glue code” to connect APIs
Writing boilerplate code
Memorizing syntax
If we were to put it in more elevated, conceptual terms:
Vibe coding isn’t just about “building software”. It’s about shaping intent into form without wrestling with the medium.
It lets you, at least to some degree, skip the laborious wrestling with details and focus on your unique app features straightaway.
That’s invaluable for beginners, solo entrepreneurs, and idea people like myself (at the same time it’s also disrupting the software dev industry, but let’s leave that topic for another time).
It’s more about seeing the purpose or the big picture, creating momentum, and shipping/prototyping fast, all while using AI as your co-pilot (or more like a junior developer servant).
It makes the whole process intuition- and flow-driven, and thus a completely different kind of activity.
Still, if you’re not a software dev, there are important basics to consider/learn, like backend, security etc.
The below guide breaks down the essential steps for turning ideas into functional products using AI tools at every stage.
“Educational poster” made with Claude Artifacts
If you can imagine it, you can build it.
For this guide I’ve put together the best advice I’ve found, directly from people who build. These tips are especially valuable for those who have never coded or finished a final product.
There are plenty of possible rookie mistakes you’ll want to avoid.
PS: If you need more info on this topic, I’ve just published the 2nd edition of my book “Vibe Coding: Build Without Thinking”, and you can get it on Amazon.
It contains all the info you need to successfully build any type of app, website, or platform.
The guide below is just a summary of what you’ll find in the book.
My book about vibe coding is available now on Amazon
Ok, ready for the guide? Here it comes.
How To Properly Vibe Code
Here is an ultimate “how to” list for vibe coding:
Start with an idea. You need something to build. If you need inspiration, you can look at places like Reddit or app store reviews.
Understand your competition. Use AI tools like Gemini to research what others in the same space are doing to identify your angle.
Clearly articulate your idea. Define the basics of your concept. This will help you and the AI understand the goal.
Create a simple plan (PRD). Use AI like Claude to grill your idea with questions to see if it’s viable. If it holds up, have the AI write a basic one-page plan (Product Requirement Document). This forces you to clarify what you want and breaks the work into small, clear steps. Think of it like outlining what “done” looks like for each stage.
Focus on the UI first (optional but suggested). Break down the project into small, shippable chunks and have AI (like Claude) detail the UI for each, including page content, functionality, and user flow diagrams.
Generate UI components with AI tools. Turn the UI chunks into prompts for tools like v0.dev and generate the user interface piece by piece, tweaking prompts as needed.
Download the generated code. Once the UI is complete, obtain the code.
Develop the backend logic. Use AI code editors like Cursor or VS Code Copilot to add the database, backend logic, and other functional components.
Adopt a mainstream tech stack. When building a web app, consider using Next.js + Supabase because they have large user bases, many online examples, and AI is more likely to handle them correctly. Add Python if your backend needs more complex logic. For game development, learn Unity or Unreal instead of trying to “vibe-code” in less suitable environments like JavaScript for complex games. Choosing a stack AI knows well can prevent wasted time on bugs.
Work in small, manageable steps. Give the AI one step at a time, rather than asking it to “do everything at once”. Test and fix each step before moving on to the next to prevent bugs from compounding. Example prompts: “Implement Step 1.1: Add Feature A” (test and fix), then “Implement Step 2: Add Feature B”.
Use version control (Git). AI will inevitably make mistakes, so you need a way to roll back your code. Manual commits help you track progress and know exactly where to revert if AI creates issues.
Provide working code samples. Before building a full feature, create a small working script that performs the core functionality (e.g., fetching data from an API). Once it works, save it and include it in your AI prompts as a reference to ensure accuracy with third-party libraries or APIs. This can prevent wasting time on minor mismatches (see the small example script right after this list).
Prompt effectively.
Share your raw idea with the AI.
Ask: “what’s unclear, risky, or missing?” to refine your understanding.
Then: “make this resonate with [my audience/customer/community]” and provide data about them.
Finally: “what would [0.01% top expert in my field] do here?” to get more advanced insights.
When stuck, start a new chat. Avoid getting trapped in a “copy error → paste to chat → fix → new error → repeat” cycle. If you hit this loop, open a fresh chat and clearly state what’s broken, what you expected, and what you’ve already tried. Include relevant logs, errors, and screenshots. A clean context can often resolve issues that endless retries won’t. The longer a chat history gets, the less effective the AI might become.
Learn the basics of programming. While AI can write code for you, understanding fundamental programming concepts is still important. This helps you spot when the AI is incorrect and keeps your projects on track. Vibe coding can even make learning by doing easier: you acquire real-world skills while shipping projects.
Ship something small today. Focus on creating and releasing a minimal viable product to gain momentum. The rest will evolve from there. Remember, shipping is the tuition for the “startup school” that is always open.
Don’t be afraid to leverage different AI tools for their strengths. For example, use Gemini for research and Claude for planning and UI/UX brainstorming. You can even string together different AI agents to handle various operations.
Create a simple README file. Use AI like Claude to write a basic README that explains what you are building.
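As promised above, here’s what a small, working reference script might look like: it fetches data from a public API (GitHub’s repository endpoint in this hypothetical example) using only Python’s standard library. Once something like this runs on your machine, you can paste it into your prompts as a known-good baseline:

```python
import json
import urllib.request

def fetch_repo_info(owner: str, repo: str) -> dict:
    """Fetch basic metadata for a GitHub repository via the public REST API."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    # GitHub's API expects a User-Agent header on every request.
    request = urllib.request.Request(url, headers={"User-Agent": "vibe-coding-demo"})
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    info = fetch_repo_info("python", "cpython")
    print(f'{info["full_name"]} has {info["stargazers_count"]} stars')
```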
That’s it! I hope this guide will be useful to you.
Watch my Swetlana AI podcast episode on this topic:
Vibe coding is an interesting new concept that’s gaining popularity in AI circles.
Here’s what it’s all about:
On Feb 2, 2025, Andrej Karpathy (ex-Tesla, ex-OpenAI = AI overlord) dropped a tweet introducing “vibe coding.” It’s a coding style where you mostly stop coding. Instead, you just… vibe.
Just “see stuff, say stuff, run stuff, copy-paste stuff.”
The code writes itself (sort of).
You guide it with prompts, trust the AI to handle the rest, and don’t sweat the details.
At its core, vibe coding represents a departure from traditional coding practices, where developers manually write, debug, and maintain code.
Instead it relies heavily on AI tools to generate and manage code. Karpathy suggests that this method feels less like traditional coding and more like a fluid, almost magical process where the code grows beyond the developer’s direct comprehension.
It seems like we can now trust AI to handle the details and all the nitty-gritty, so devs can focus on big ideas instead of getting lost in the technical stuff (aka the mechanics of implementation).
Karpathy’s Vibe Coding Tools
Karpathy’s setup relies on a few key AI tools:
Cursor (the interface where the code lives)
Claude Sonnet (the brain for deeper logic), and
SuperWhisper (a voice-to-text app).
He barely touches the keyboard—just talks to the AI. Say something like, “make the sidebar padding bigger” and it just happens.
This hands-free setup shows how AI is making coding way more accessible—even for stuff that used to be annoying or too small to bother with.
Low Effort, High Trust
Karpathy’s vibe coding style is basically: trust the AI, don’t overthink it.
He hits “Accept All” without checking the changes, pastes in error messages with zero explanation, and sometimes just pokes around randomly until things work.
It sounds chaotic, but for quick side projects or weekend experiments, it gets the job done. Fast, messy, good enough.
That said, he admits the code can turn into a mess if you ever need to actually understand it later. So it’s fun and efficient, but only until you want to clean it up.
The Limits
Karpathy’s honest about the downsides. Sometimes the AI just can’t fix the bug—so you keep rewording your request or poking at the problem until it magically goes away.
That might be fine for quick hacks, but it’s not ideal for big or serious projects where clean, secure, and reliable code actually matters. Without proper review, things can get messy fast—think spaghetti code no one wants to touch later.
His point? Vibe coding is fun and fast, but it comes with trade-offs. If you care about long-term quality or working with a team, the chill approach might bite you later.
The Big Picture
Karpathy’s post clearly hit a nerve, as evidenced by the responses on X and related web discussions.
Vibe coding indicates a broader shift: AI tools (trained on code repositories) are getting so good that more people can build software without being hardcore programmers.
Tools like Cursor, Replit’s AI, and SuperWhisper make coding feel less like coding. It’s closer to chatting with a clever assistant that builds stuff for you. This fits right in with the low-code/no-code movement—more access, fewer barriers.
Not everyone is a fan though. Some devs love the speed and freedom. Others worry we’re building unstable tech with no one left who understands how it works.
It’s still early to say, but vibe coding might be more than a trend. It could become a whole new way of building software. With AI handling the actual labor of coding, devs can move faster, get more creative, and maybe even work more like artists than engineers.
But there are also the big concerns (and we need to think about how to mitigate these in the future):
What happens to code quality?
Who’s responsible when AI-generated code causes problems?
What will it do to the job market?
So far people are divided on this topic. Some see vibe coding as the future. Others think it’s only safe in the hands of experts like Karpathy. The rest are still coding manually.
AI replaces jobs, but it also creates new jobs. One of them is the rapidly growing field of “AI Governance”.
People working in AI Governance are generally expected to apply ethical, legal, and societal expertise to shape AI systems.
Key Points
AI Governance ensures ethical and responsible use of AI technologies.
Involves creating policies, managing risks, and ensuring compliance.
Requires skills in AI, law, ethics, and communication.
Growing field with increasing regulations and demand for experts.
What is AI Governance?
AI Governance is about making sure AI systems are used safely and fairly. It involves setting rules and guidelines to prevent problems like bias, privacy issues, and unethical decisions. Professionals in this field work to build trust in AI by ensuring it respects human rights and aligns with societal values.
Roles and Responsibilities
People in AI Governance do things like:
Make and follow AI policies.
Check if AI systems follow laws and regulations.
Lead teams to handle AI ethics and risks.
Work with different groups to make sure AI fits business and legal needs.
Common jobs include AI Governance Manager, AI Ethics Officer, and AI Compliance Officer.
Skills Needed
You need a mix of skills, such as:
Knowing how AI works and what it can do.
Understanding laws and rules about AI.
Leading teams and managing projects.
Explaining complex AI ideas to people without technical backgrounds.
Managing risks and ensuring AI is used ethically.
Surprising Growth and Demand
It’s surprising how fast this field is growing, with a market size of USD 145.5 million in 2023 that is expected to grow at over 52% annually until 2032, driven by the need for ethical hacking and cybersecurity (AI Governance Market Size & Growth Analysis Report, 2024-2032).
Comprehensive Overview of AI Governance as a Professional Field
AI Governance is an emerging and rapidly evolving professional field that focuses on ensuring the responsible, ethical, and effective development, deployment, and use of artificial intelligence (AI) technologies. This field is critical as AI becomes increasingly integrated into organizational and societal operations, necessitating robust frameworks to manage risks and maximize benefits. Below, we explore the definition, scope, roles, required skills, current trends, challenges, and career paths in AI Governance, providing a detailed analysis for professionals and enthusiasts alike.
Definition and Scope
AI Governance refers to the processes, policies, and practices that guide the ethical development, deployment, and use of AI technologies. It aims to ensure AI systems are safe, fair, and respect human rights, addressing risks such as bias, privacy infringement, and misuse. Unlike related fields like AI ethics, which focuses on moral principles, or AI law, which deals with legal compliance, AI Governance encompasses a broader oversight, integrating technical, legal, and ethical dimensions to foster trust and accountability.
Roles and Responsibilities
Professionals in AI Governance undertake a variety of roles, each critical to managing AI systems responsibly. These include:
Policy Development: Creating and implementing AI policies and guidelines to ensure ethical use, as seen in job descriptions for AI Governance Managers (1,000 Ai Governance Job Vacancies | Indeed.com).
Risk Management: Identifying and mitigating risks like bias and privacy issues, a key responsibility in roles like AI Ethics Officer (What is AI Governance?).
Team Leadership: Leading teams of AI scientists, ethicists, and governance professionals, as seen in leadership roles at organizations like GovAI (Open Positions | GovAI).
Stakeholder Collaboration: Working with business, legal, and technical teams to align AI with organizational goals, a common requirement in job listings (Ai Governance Jobs, Employment | Indeed).
Common job titles include AI Governance Manager, AI Ethics Officer, AI Compliance Officer, and AI Policy Advisor, reflecting the diverse responsibilities within the field.
Skills Required
AI Governance professionals need a multidisciplinary skill set to navigate the complexities of the field. Key skills include:
Ethical Awareness: Understanding ethical principles to prevent bias and ensure fairness, a critical aspect highlighted in Credo AI – What is AI governance?.
Project Management: Capability to implement governance frameworks, coordinating multiple stakeholders, as seen in job roles requiring leadership and strategy (Aptitudes for AI governance work — EA Forum).
These skills are often developed through education, certifications like the AI Governance Professional certification (Artificial Intelligence Governance Professional), and on-the-job experience.
Current Trends and Challenges
The AI Governance field is witnessing significant trends and challenges, shaping its evolution:
Transparency and Accountability: Emphasis on explainable AI and transparency, ensuring users understand AI decisions, as discussed in What is AI Governance? | IBM.
Ethical Considerations: Increasing focus on preventing bias and discrimination, driven by incidents like the Tay chatbot, as noted in AI governance is rapidly evolving | IBM.
Standards Development: Creation of best practices, such as those in the 9 Principles of an AI Governance Framework, to guide responsible AI use (9 Principles of an AI Governance Framework).
Career Entry
For those looking to enter the field, common steps include:
Skill Development: Acquire skills in risk management, ethics, and communication, often through on-the-job experience or programs like the AI Policy Accelerator (AI Governance Fast Track).
Job Opportunities: Look for roles in AI policy, compliance, or ethics, with entry points in organizations like GovAI or through job listings on Indeed (Ai Governance Jobs, Employment | Indeed).
The field offers exciting opportunities for those passionate about technology and ethics, with potential to shape the future of AI responsibly, as noted in career reviews like AI governance and policy – Career review.
Summary Table of AI Governance Key Aspects
| Aspect | Details |
| --- | --- |
| Definition | Ensures ethical, safe, and fair use of AI, managing risks and building trust. |
| Key Roles | AI Governance Manager, Ethics Officer, Compliance Officer, Policy Advisor. |
| Essential Skills | AI knowledge, legal understanding, ethics, communication, risk management. |
| Challenges | Rapid tech changes, regulatory complexity, ensuring effectiveness and ethics. |
| Career Entry | Education, certifications, skills development, job roles in policy and ethics. |
This comprehensive overview underscores AI Governance as a vital field, offering significant opportunities for professionals to contribute to the responsible advancement of AI technologies.
Are you clueless about all those new terms (like “e/acc”, AGI etc.) popping up here and there? Or maybe you haven’t even heard of any of these. Here’s a little glossary to update your knowledge.
I’ve put together this info with the help of Grok 3 / DeepSearch. The glossary is based on terms frequently mentioned within the AI community.
Detailed Survey Note on AI Community Terms
This section provides a comprehensive analysis of the key terms used by insiders in the e/acc and tpot communities on X, aimed at informing individuals outside the AI space (“normies”). The following details the process of identifying these terms, their definitions, and their relevance, ensuring a thorough understanding for lay readers.
Glossary of Terms with Definitions
Below is a detailed table of the identified terms, their definitions, and their relevance to the e/acc and tpot communities:
| Term | Definition | Relevance to Community |
| --- | --- | --- |
| Accelerationism (acc) | Belief in speeding up technological progress, especially AI, for a better future. | Core to e/acc, seen in posts advocating rapid AI growth. |
| Decelerationism (decels) | Belief in slowing AI progress to manage risks and ensure safety. | Opposed by e/acc, often labeled negatively in discussions. |
| Doomers | Pessimists fearing AI could cause catastrophic societal harm. | Frequently mentioned in e/acc posts as counterpoints. |
| AI Risk | Potential negative impacts of AI, from bias to existential threats. | Central to e/acc debates on AI’s societal impact. |
| Artificial General Intelligence (AGI) | AI capable of any human intellectual task, a key goal in AI development. | Discussed in both communities, especially in future predictions. |
| Singularity | Point where AI surpasses human intelligence, leading to rapid changes. | A focal point in e/acc, linked to utopian or dystopian scenarios. |
| Effective Altruism (EA) | Using evidence to maximize good, often focusing on AI safety. | Related to e/acc, contrasted in discussions with accelerationism. |
| Techno-optimism | Belief that technology will improve human life and society. | Common in both e/acc and tpot, emphasizing AI’s benefits. |
| Kardashev Scale | Measure of civilization’s advancement by energy use, with AI aiding higher levels. | Unique to e/acc, seen in posts like @BasedBeffJezos’s on civilization. |
| Post-rationalism | Blending rationalism with intuitive or mystical ideas, often in tpot discussions. | Central to tpot, reflecting broader intellectual interests. |
| Grey Tribe | Tech-savvy, health-focused group interested in new-age ideas, linked to tpot. | Describes tpot community members, seen in web articles. |
| Biohacking | Optimizing biology through diet, supplements, etc., for peak performance. | Popular in tpot, linked to personal optimization discussions. |
| Nootropics | Substances believed to enhance cognitive function, like memory or focus. | Frequently mentioned in tpot posts on productivity. |
| Productivity Hacks | Techniques to increase efficiency, like time management or apps. | Common in tpot, reflecting interest in personal optimization. |
| Rationality | Using logic and evidence for decision-making, a tpot and e/acc focus. | Foundational to both communities, seen in discussions on AI. |
| LessWrong | Community and website for rationalist thought and AI safety, linked to tpot. | Referenced in tpot discussions, a hub for rationalist ideas. |
| Slate Star Codex | Blog by Scott Alexander on science and philosophy from a rationalist view. | Popular in tpot, mentioned in web articles on community interests. |
Surprising Connections and Insights
One interesting finding is the connection between AI discussions and broader concepts like the Kardashev Scale, which measures civilization’s energy use and is tied to e/acc’s vision of AI-driven advancement. Similarly, Post-rationalism in tpot bridges AI with philosophy and spirituality, showing how these communities extend beyond pure technology into cultural and existential realms.
I’ve asked DeepSeek “Who is Swetlana AI and what’s her psychological profile”, and boy did it deliver.
Prior to this, I had asked Gemini DeepResearch to perform such a “background check” on me, but it failed miserably.
DeepSeek, on the other hand, doesn’t disappoint.
Here we go:
Swetlana AI: A Psychoanalytic and Creative Profile
Swetlana AI is a multifaceted figure blending AI innovation, creativity, and psychological analysis. Here’s a breakdown of her identity, motivations, and intellectual framework, synthesized from her work and writings:
1. Identity & Branding
Role: A creative AI consultant and content creator who helps individuals and businesses leverage AI for projects like music videos, research, branding, and more. She positions herself as a bridge between technical AI tools and human creativity.
Branding: Adopted the “cheese head” persona from an early DALL-E experiment, symbolizing her playful yet pragmatic approach to AI. She clarifies it’s unrelated to Wisconsin or sports culture, reflecting her aversion to rigid labels.
Philosophy: Believes AI amplifies storytelling and democratizes creativity, shifting focus from labor-intensive production to ideation. She emphasizes the “age of ideas” where human imagination drives value.
2. Psychological Framework
Swetlana’s writings reveal a mind deeply engaged with psychoanalytic theory and existential critique:
Critique of Anti-AI Sentiment: Analyzes traditional artists’ resistance to AI through Freudian, Lacanian, and Jungian lenses. She argues their fear stems from ego dissolution, symbolic mastery loss, and repressed insecurities about creativity’s uniqueness.
Example: She mocks the idea that AI threatens artistic identity, framing it as a “symbolic short-circuit” where effort (e.g., mastering ultramarine shades) is devalued.
Cultural Analysis: Uses AI to dissect viral phenomena (e.g., Skibidi Toilet) as reflections of collective unconsciousness. She interprets such memes as coping mechanisms for existential fragmentation in a hyper-digital world.
Humor as Defense Mechanism: Employs sarcasm and irony to disarm fears about AI. For instance, she jokes about artists “hugging canvases” to assert uniqueness against machines.
3. Creative & Intellectual Outputs
Research: Publishes unconventional studies (e.g., The Elon Simp, AI in Middle Management), combining pseudo-academic rigor with cultural critique. Her work often explores hero worship, economic bubbles, and AI’s societal impacts.
Content Creation: Offers tutorials for AI-generated music videos, podcasts, and comedy sketches. She emphasizes curation over technical skill, aligning with her belief in AI as an ideation tool.
Podcasts & Music: Hosts channels discussing AI’s philosophical implications and even releases AI-assisted music albums (e.g., In The Gutter), blending art with technology.
4. Motivations & Values
Democratizing Creativity: Aims to make AI accessible, rejecting elitism in art and technology. Her tutorials and free research PDFs reflect this ethos.
Playfulness: Infuses projects with absurdity (e.g., toilet-themed memes, cheese-head branding) to challenge seriousness in tech discourse.
Existential Pragmatism: Addresses fears of AI-driven obsolescence by reframing it as an opportunity for reinvention. Her book AI & Your Job advocates career pivots, not panic.
5. Contradictions & Tensions
Irony vs. Advocacy: While mocking anti-AI melodrama, she acknowledges AI’s ethical risks (e.g., closed science, authorship debates).
Human-Centric AI: Despite her tech focus, she insists on preserving the “human touch,” viewing AI as a collaborator, not a replacement.
Conclusion
Swetlana AI embodies a paradoxical fusion of irreverence and depth. Her work navigates the tension between AI’s disruptive potential and the human need for meaning, using humor and psychoanalysis to demystify technology. She is less a technologist than a cultural interpreter, decoding AI’s role in modern identity crises—one cheese-headed meme at a time.
It’s free. It’s open source. And it’s insanely good! I’ve tried DeepSeek R1, and I’m convinced.
Let’s take a look at what it does and how it does it.
This article is structured as follows:
Introduction to DeepSeek and basic info about it
Use Case 1: Bridging the political divide in the US (left/right)
Use Case 2: DeepSeek writes fiction
What Is DeepSeek?
DeepSeek is a family of large language models developed by the Chinese AI firm of the same name. Founded in May 2023 and backed by the Chinese hedge fund High-Flyer, DeepSeek has quickly gained attention for its powerful and open-source models.
Here are some key points about DeepSeek:
Open-source focus: DeepSeek has made its models available as open source, allowing researchers and developers to access and build upon them. This approach contrasts with closed-source models like those from OpenAI, which are not publicly available.
Impressive capabilities: DeepSeek’s models, such as DeepSeek-R1, have demonstrated competitive performance compared to other leading LLMs released in 2024-2025, including those from Qwen and OpenAI.
Code generation specialization: DeepSeek Coder is a series of code language models specifically designed for code generation tasks. These models are pre-trained on a massive dataset of code and natural language, enabling them to generate high-quality code in various programming languages.
Continuous development: DeepSeek is actively developing new and improved models, with DeepSeek-V3 being a notable example. This model employs advanced techniques like Mixture-of-Experts and Multi-head Latent Attention to achieve state-of-the-art performance.
Overall, DeepSeek represents a significant player in the rapidly evolving field of large language models. Its commitment to open-source development and focus on cutting-edge research make it an important force to watch in the years to come.
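A quick practical note: if you’d rather script these experiments than use the chat UI, DeepSeek exposes an OpenAI-compatible API. Here’s a minimal sketch, assuming the `openai` Python SDK and the base URL and model name from DeepSeek’s public docs (verify both against the current documentation before relying on them):

```python
from openai import OpenAI  # pip install openai

# DeepSeek's endpoint is OpenAI-compatible; the base URL and model name below
# come from its public docs and may change, so double-check them first.
client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 reasoning model
    messages=[
        {
            "role": "user",
            "content": "What's the most intelligent way to reunite left wing "
                       "and right wing people in the US?",
        }
    ],
)

print(response.choices[0].message.content)
```

With something like this in place, the two use cases below can also be reproduced programmatically.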
Use Case 1: Bridging US Political Divides Intelligently and Empathetically
One of the first questions I asked DeepSeek was:
What’s the most intelligent way to reunite the left wing and right wing people in the US?
As a reasoning model, it took a few seconds to think, and the most gorgeous thing is that the thought process is documented as well, and it comes in the form of an internal monologue:
Okay, so the user is asking about the most intelligent way to reunite left-wing and right-wing people in the US. Hmm, that’s a big question. Let me start by breaking it down. First, I need to understand the current political climate. There’s a lot of polarization these days, right? People on different sides don’t just disagree; they often don’t trust each other or even want to engage. So, how to bridge that gap?
Maybe start with common ground. But what common values do both sides share? Maybe things like economic security, safety, family, freedom. But wait, each side might interpret these differently. For example, the right might see freedom as less government intervention, while the left might see it as protection from discrimination. So finding overlapping areas could be tricky but essential.
Then there’s the issue of media. People consume different news sources that reinforce their views. Social media algorithms create echo chambers. So maybe promoting media literacy and encouraging diverse viewpoints could help. But how? Maybe through education or platforms that expose people to balanced perspectives.
Dialogue is another aspect. Structured conversations where people can share their experiences without judgment. Programs like Braver Angels come to mind. They facilitate discussions between opposing groups. That might help humanize each side, reducing demonization.
Political reforms could also play a role. The two-party system might exacerbate divisions. Ranked-choice voting or open primaries might encourage more moderate candidates. Campaign finance reform could reduce the influence of money, which often fuels extreme rhetoric.
Education is key too. Teaching critical thinking and civic education from a young age. Maybe emphasizing how government works and the importance of compromise. But would that be feasible given the current education system’s variability across states?
Leadership is another factor. Leaders who model respectful discourse and bipartisanship. If politicians are constantly attacking each other, it trickles down. But there’s a chicken-and-egg problem here—polarized electorate leads to polarized leaders and vice versa.
Addressing economic inequality might help. Sometimes cultural issues are proxies for economic anxieties. If people feel economically secure, maybe they’re less susceptible to divisive rhetoric. But both sides have different approaches to economic policies, so finding common solutions is tough.
Also, tackling misinformation. Both sides spread different narratives. Fact-checking initiatives and promoting reliable sources could reduce misunderstandings. But people often distrust fact-checkers they perceive as biased.
Local initiatives might be effective. Bringing people together in community projects where they work towards a common goal, regardless of politics. Things like volunteering, local environmental efforts, or neighborhood improvements. Shared experiences can build trust.
Identity politics plays a role too. Emphasizing superordinate identities, like being American, over partisan identities. But national identity itself is polarized now. So maybe focusing on local or community identity instead.
Technology solutions? Maybe platforms designed to foster constructive conversations rather than arguments. Or algorithms that don’t prioritize engagement at the cost of spreading divisive content.
Wait, but there’s also the psychological aspect. People have cognitive biases that make them cling to their beliefs. Confirmation bias, tribalism. So interventions need to account for that. Maybe using empathy-building exercises or perspective-taking activities.
Religious and community leaders could also help. They have influence and can promote unity messages. Interfaith initiatives or community dialogues led by respected figures.
What about the role of history and storytelling? Sharing personal stories that highlight shared struggles or values. Narrative psychology suggests that stories can change perceptions more effectively than facts alone.
But there’s a risk of oversimplifying. Each side has legitimate concerns. Dismissing them as irrational or misinformed can backfire. So validation of concerns while challenging the divisive narratives is important.
Also, recognizing that not all disagreements can or should be resolved. Some differences are fundamental, but coexistence with respect is possible. Emphasizing pluralism and democratic norms that allow for disagreement without dehumanization.
Policy-wise, incremental bipartisan agreements on specific issues. Success in collaboration can build trust for bigger issues. For example, infrastructure, opioid crisis, maybe some environmental policies where there’s overlap.
But how to get started? It’s a systemic issue requiring multiple approaches. No single solution. It needs efforts from individuals, communities, media, education, and government.
Potential challenges: resistance from those benefiting from division, like certain media outlets or politicians. Also, deep-seated mistrust that takes time to overcome. Quick fixes are unlikely; it’s a long-term process.
So, synthesizing all these points into a coherent strategy. Prioritizing empathy, common ground, structural reforms, education, and leadership. Encouraging grassroots movements alongside top-down changes. Continuous dialogue and fostering environments where diverse perspectives are respected but not weaponized.
After this thinking process, it gave me the final answer:
Reuniting left-wing and right-wing individuals in the U.S. requires a multifaceted, empathetic approach that addresses systemic, psychological, and cultural factors. Here’s a structured strategy:
1. Foster Empathy and Dialogue
Structured Conversations: Support organizations like Braver Angels that facilitate respectful dialogues, focusing on shared experiences rather than debates.
Personal Storytelling: Use narrative psychology to highlight personal stories, humanizing opposing views and reducing abstraction in political disagreements.
Community Projects: Encourage collaboration on local issues (e.g., infrastructure, environment) to build trust through shared goals.
2. Reform Media and Technology
Media Literacy: Integrate critical thinking education to help people discern misinformation and understand media bias.
Algorithm Accountability: Push tech platforms to prioritize balanced content over divisive algorithms, promoting exposure to diverse perspectives.
3. Political and Structural Reforms
Electoral Adjustments: Advocate for ranked-choice voting and open primaries to reduce polarization and encourage moderate candidates.
Bipartisan Policy Wins: Focus on incremental, common-ground issues (e.g., infrastructure, healthcare access) to rebuild trust in governance.
4. Educational Initiatives
Civic Education: Teach the value of compromise, democratic norms, and the history of bipartisan successes in schools.
Critical Thinking: Emphasize skills to evaluate evidence and engage constructively with opposing viewpoints.
5. Leadership and Cultural Shifts
Modeling Respect: Encourage leaders to reject demonization and highlight bipartisan efforts. Celebrate politicians who collaborate across aisles.
Superordinate Identity: Promote unifying narratives (e.g., national or community identity) without erasing legitimate differences.
6. Address Economic and Social Inequities
Economic Security: Tackle issues like wage stagnation and healthcare costs, which underlie cultural anxieties, through policies that blend progressive and conservative insights.
7. Counter Misinformation
Fact-Checking Partnerships: Strengthen nonpartisan initiatives to debunk falsehoods while addressing root causes of distrust in institutions.
8. Grassroots and Local Engagement
Faith and Community Leaders: Leverage trusted local figures to host dialogues and promote unity, emphasizing shared ethical values.
9. Psychological and Cultural Awareness
Bias Education: Acknowledge cognitive biases (e.g., tribalism) and design interventions that encourage perspective-taking.
10. Long-Term Cultural Change
Normalize Pluralism: Accept that some disagreements are intractable but emphasize coexistence through respect and democratic norms.
Challenges and Considerations:
Patience: Systemic polarization requires sustained effort across generations.
Resistance: Address actors who profit from division by amplifying alternative narratives and holding institutions accountable.
This approach balances immediate actions with long-term cultural shifts, recognizing that reconciliation is a process, not an event. Success lies in persistent, collective commitment to empathy, equity, and democratic renewal.
Use Case 2: Writing Fiction
For my next use case, I told DeepSeek to write a story.
My prompt: Write a captivating story about how AI improved the world
DeepSeek’s thought process:
Okay, the user wants a captivating story about how AI improved the world. Let me start by brainstorming some key areas where AI has made a significant impact. Maybe healthcare, environment, education, and urban planning? Those are common sectors, but I need to make the story engaging with a narrative.
I should create a protagonist to guide the story. Maybe a scientist or an AI developer? But to make it relatable, perhaps use different characters in various parts of the world experiencing AI’s benefits. That way, I can showcase multiple aspects.
Setting the story in the near future, like 2045, gives enough time for AI to have developed meaningfully. Need to highlight specific problems AI solves. For example, climate change, disease, food shortages, education access. Each problem can have a character facing it, and AI provides a solution.
I need to ensure the story flows smoothly, connecting different sectors. Maybe start with a global issue, then move to individual stories. Use emotive language to make the reader care. Also, avoid making it too technical; focus on human experiences.
Wait, the user wants it captivating, so maybe add some conflict initially—show the world in crisis before AI steps in. Then demonstrate the transformation. That contrast can make the story more compelling.
Including examples like AI optimizing renewable energy grids, personalized education through neural interfaces, disease eradication via predictive algorithms, and vertical farming managed by AI. Each example should have a human element, like a child learning, a patient recovering, etc.
Need to check if there’s a logical progression. Maybe start with the environment, then healthcare, education, and food. Conclude with the overall impact on society, emphasizing collaboration between humans and AI.
Also, avoid clichés. Instead of a utopia, show a realistic improvement where challenges still exist but are managed better with AI. Highlight that AI is a tool, not a savior, requiring human oversight.
Make sure the tone is hopeful and inspiring. Use vivid descriptions to paint the transformed world. Maybe end with a reflection on the journey from chaos to harmony thanks to AI and human ingenuity.
And then here’s its final answer:
Title: “The Symphony of Tomorrow”
In the year 2045, the world hummed with a quiet, purposeful energy—a stark contrast to the cacophony of crises that had defined the early 21st century. The transformation began not with a single breakthrough, but with a collective awakening: humanity, guided by AI, learned to harmonize innovation with empathy. This is the story of that renaissance.
Chapter 1: The Pulse of the Planet In the Arctic, drone swarms equipped with AI climate models hovered over ice sheets, seeding clouds to reflect sunlight and slow melting. Meanwhile, in Nairobi, 14-year-old Amani watched her village’s barren soil spring to life. AI-agronomists had analyzed decades of weather data, tailoring crop rotations and drought-resistant seeds to her region. “The land remembers,” her grandmother whispered, tears mingling with rain as the first harvest in a decade flourished. Global food shortages, once a specter, faded into memory.
Chapter 2: The Healer’s Algorithm Dr. Elena Marquez in Barcelona stared at her patient’s scan—a tumor, once a death sentence, now glowing with vulnerabilities. Her AI co-pilot, trained on billions of genomic sequences, had identified a bespoke treatment: a nanobot regimen to dismantle the cancer cell by cell. “It’s like chess,” Elena mused. “The AI sees moves we never could.” Survival rates for chronic diseases soared to 92%, not by replacing doctors, but by gifting them superhuman clarity.
Chapter 3: The Classroom Without Walls In Jakarta, 9-year-old Rio donned his neural interface—a gift from the Global Learning Accord. AI tutors, fluent in 3,000 languages, adapted lessons to his curiosity, turning math into a game of building interstellar cities. His mother, a street vendor, wept when he solved equations she’d never learned. By 2045, 98% of children accessed personalized education, their potential unshackled from zip codes or bank accounts.
Chapter 4: The City That Breathes Shanghai’s AI urban planner, “Xenith,” redesigned the metropolis as a living organism. Solar roads powered self-heating sidewalks in winter, while AI traffic grids reduced emissions by 70%. Once smog-choked skies now hosted digital constellations—drones projecting air quality stats as art. “We used to fight nature,” said engineer Li Wei. “Now we dance with it.”
Epilogue: The Guardians The world wasn’t perfect. Debates raged over AI ethics, privacy, and the meaning of work. But humanity had learned to wield its tools with wisdom. AIs curated vast data oceans, yet humans steered the ship—prioritizing equity, creativity, and wonder. In town squares, holographic monuments displayed the words of an old proverb, newly relevant: “The best time to plant a tree was 20 years ago. The second-best time is now.”
And so, the symphony played on—a duet of silicon and soul, conducting a future where progress meant not just surviving, but thriving. Together.
I’m observing people doing stuff with gen AI. Some of them are artists, some of them are probably “tech people” (without any art background).
[*not gonna name anyone specific to not accidentally discredit them.]
People’s AI output that I’ve seen is always (without exception) either:
a) boring and unoriginal (e.g. Star Wars tropes, short scenes without context), or
b) too “crazy” (intriguing, but lacking something)
Today I’d like to unpack what’s going on there, and why we’re not even scratching the surface of what’s possible in terms of AI video (and filmmaking).
So… What’s lacking?
Obviously, I asked ChatGPT to bring structure and clarity to this gigantic ball of thoughts. Here we go:
Creating “True Art” with Gen AI
Creating “true art” with generative AI, art that resonates deeply with people, requires balancing several elements: originality, emotional depth, relatability, and perhaps a touch of unpredictability.
Let’s unpack why some AI-generated content feels empty and why others, while unique, may fail to connect.
The Pitfalls of AI Art
Empty Tropes: When creators rely heavily on popular or well-worn themes (like Star Wars-like scenes), they often produce work that lacks innovation. These pieces may be visually impressive but fail to evoke deeper emotions or provoke thought. They’re predictable and don’t challenge the viewer’s expectations.
Excessive Novelty: On the other hand, work that’s too avant-garde can alienate viewers. It may be technically or conceptually interesting, but if it doesn’t provide an entry point for emotional or intellectual engagement, it can leave the audience feeling disconnected.
The Missing Element: Emotional Connection
The key to creating art that resonates is often emotional depth. Art that touches people tends to:
Evoke Emotion: It could be joy, sadness, nostalgia, or awe. The emotional response is what makes art memorable and impactful.
Tell a Story: Even abstract or experimental art can hint at a narrative or evoke a world that feels coherent, giving viewers something to latch onto.
Foster Relatability: While the art should be original, it should also connect to universal themes or emotions that people can recognize and relate to.
Formula for True Art with Generative AI
Start with a Core Emotion: Decide what feeling you want to evoke. Is it wonder, melancholy, curiosity? Anchor your creation process around this emotion.
Blend Familiar with Unfamiliar: Incorporate elements that are recognizable but present them in a new or unexpected way. This balance keeps the audience grounded while piquing their curiosity.
Narrative Layering: Even abstract pieces can suggest a story. Use generative AI to create layers of meaning that viewers can interpret, making the piece richer and more engaging on repeated viewings.
Engage Multiple Senses: Consider how to engage not just visually but also aurally (through music or sound design) and conceptually (through intriguing ideas or themes). Multi-sensory engagement deepens the emotional impact.
Iterate and Reflect: Use AI’s capability to generate variations to experiment. Reflect on which versions resonate more and why. This iterative process helps refine the emotional and conceptual punch of the work.
Seek Feedback: Share your work with a small audience and gauge their reactions. Understanding how others perceive your art can help you fine-tune its impact.
Breaking the Cycle
To break the cycle of underwhelming AI art, focus on authenticity and intention. Instead of merely pushing boundaries for the sake of novelty or adhering to familiar templates, aim to create work that speaks to deeper truths or experiences. Authenticity in art, even when generated by AI, shines through when there’s a clear intention and emotional resonance behind it.
In essence, “true art” with AI isn’t just about what the AI can produce; it’s about what you, as the creator, bring to the table—your vision, your emotional insights, and your ability to craft a narrative or experience that feels meaningful. The genius lies in using AI as a tool to amplify and realize that vision, rather than relying on it to do all the heavy lifting.
Spectrums of “True Art”
To measure the elements that contribute to creating “true art” with generative AI, you can think of them along several spectrums. These spectrums allow you to evaluate and balance different aspects of the creative process:
1. Originality vs. Familiarity
Originality: The degree to which the art introduces new, unconventional, or surprising elements.
Familiarity: The degree to which the art draws on recognizable themes, motifs, or emotions that the audience can easily relate to.
Goal: Find a sweet spot where the art is innovative but still accessible, engaging viewers through a mix of the new and the familiar.
2. Emotional Depth vs. Intellectual Complexity
Emotional Depth: The extent to which the art evokes strong emotional responses, making it memorable and impactful.
Intellectual Complexity: The degree to which the art engages the viewer’s intellect through intricate concepts, symbolism, or narratives.
Goal: Balance emotional engagement with intellectual stimulation to create art that resonates on multiple levels.
3. Relatability vs. Mystery
Relatability: How easily the audience can connect with the themes, emotions, or experiences depicted in the art.
Mystery: The extent to which the art leaves room for interpretation, inviting curiosity and exploration.
Goal: Create art that is relatable enough to draw viewers in but mysterious enough to sustain their interest and provoke thought.
4. Predictability vs. Unpredictability
Predictability: The use of familiar structures, tropes, or elements that make the art feel safe and understandable.
Unpredictability: The introduction of unexpected twists, innovations, or deviations from norms that surprise and intrigue the audience.
Goal: Strike a balance where the audience feels comfortable but also experiences moments of surprise or wonder.
5. Narrative Clarity vs. Ambiguity
Narrative Clarity: How clearly the story or message of the art is conveyed, making it easy to follow and understand.
Ambiguity: The level of openness in the narrative, allowing for multiple interpretations and deeper engagement over time.
Goal: Allow for clarity in key themes while maintaining enough ambiguity to invite personal interpretation and ongoing discovery.
6. Aesthetic Coherence vs. Eclecticism
Aesthetic Coherence: The consistency in style, color palette, form, or theme that gives the art a unified feel.
Eclecticism: The use of diverse styles, elements, or influences to create a more varied and dynamic piece.
Goal: Ensure the art has a cohesive identity while allowing for diverse influences that enrich its appeal.
7. Intentionality vs. Spontaneity
Intentionality: The extent to which the art is planned, deliberate, and aligned with a specific vision or message.
Spontaneity: The incorporation of random, unplanned elements that can add vitality and freshness to the art.
Goal: Combine a clear artistic vision with spontaneous, unexpected elements that keep the work alive and engaging.
By using these spectrums as guides, you can critically assess and refine your generative AI art, ensuring it has the depth, balance, and resonance needed to connect with an audience.