How will AI impact the job market? I’ve been asking myself this question countless times, and so far *this* podcast episode seems to give the best answers. Let’s break down this conversation by OpenAI’s COO and its Chief Economist (yes, OpenAI has a Chief Economist!).
In episode 3 (from July 15, 2025) of their podcast, OpenAI COO Brad Lightcap and Chief Economist Ronnie Chatterji discuss the impacts of AI on software, science, small business, education, and jobs.
After listening to that episode I felt there were so many individual “gems” in it that I just had to extract the most important ones.
Each of the following statements deserves to be meditated over.
🧠 AI as a Tool for Empowerment
“AI is a tool that lets people do things that they had no ability to do otherwise.” — Lightcap
“You have the world’s smartest brain at your fingertips to solve hard problems.” — Chatterji
“It’s the return of the idea guy.” — Lightcap
“AI is interesting because it really is kind of a reflection of your will.” — Lightcap
💼 Jobs, Productivity & Individual Empowerment
“If you wake up one day and decide you want to start a business, that just got meaningfully easier.” — Lightcap
“Software engineers are becoming not 10% more productive, but maybe 10x more productive.” — Lightcap
“What could they build if you can write that much more code and that much better code?” — Chatterji
🧪 Scientific Research & Discovery
“If we can accelerate science, accelerate discovery, we’re gonna have more economic growth and more good things for everybody.” — Chatterji
“Imagine a corridor with doors on either side — AI lets scientists peek behind all the doors.” — Chatterji
“You’re enabling the people who work with and around the scientists to accelerate the end product.” — Lightcap
🏢 Small Businesses & Emerging Markets
“Small teams can do a lot more. We’re seeing companies where non-technical people are building agents.” — Lightcap
“Small business owners can leverage agents for evidence-based advice — that’s something I’m very interested in.” — Chatterji
“In Africa, one of the biggest ROIs is agricultural extension support — AI can scale that.” — Chatterji
🧑‍🏫 Education Revolution
“The entire way we think about education will have to adapt.” — Lightcap
“What are you teaching in Kindergarten? How to be a human — that’s now the most important skill.” — Chatterji
Generative AI makes almost anything possible. You just need to know the tools and be clear about what you want to say.*
[*In addition to that, you’ll of course need motivation, agency, and taste, but let’s save that discussion for next time.]
For better or worse, art/culture is about to change, and I’m changing with it.
For context: among other things, I’m also a “real artist”, aka an artist who has been creating without AI for quite some time. Luckily, I’m very pro-tech (I grew up with computers thanks to my dad) and have no unresolved conflicts pertaining to creativity, which is why I’ve experienced a pretty smooth conversion from “real art” to AI content.
I’m mentioning this because a huge chunk of “real artists” continues to be outraged about AI. But that’s a topic for a future blog post.
In the past few months I’ve been using AI to create a lot of stuff (see “Projects”). Since I’m very active on Twitter/X (by the way, let’s connect), I tend to learn about new tools very early, and in many cases I head straight to the tool to try it out.
General Observations
Experimenting, or as some call it “tinkering”, is essential. We are entering a new terrain of artistic expression, with a lot (most?) of it still unexplored.
Imagine what it must have felt like when photography was invented. As cameras became more accessible and photography moved from early adoption into the mainstream, people began to test its boundaries.
They didn’t just replicate paintings. They played with light, blur, composition, even accidents. That’s how entirely new aesthetics emerged.
The same applies to AI tools today. The real breakthroughs don’t come from following tutorials step by step, but rather from misusing tools, playing with prompts, layering outputs, remixing styles, and exploring the “wrong” ways of doing things.
Tinkering isn’t aimlessness; it’s a tool of discovery.
It’s how genres are born, how formats mutate, and how we can stretch the limits of what feels possible.
What I’ve Learned
Things are changing fast, but for the foreseeable future these insights will probably hold true.
Assuming you want to enter the space and try creating something:
Many platforms offer free daily credits, and you should make use of those.
Have a structured collection of AI tools. In my case it’s a bookmarks folder.
Definitely also structure your computer folders. You might end up with a lot of images and videos. Don’t get lost in the jungle.
Spend time away from your computer and write down your ideas on paper. In my case, being in nature really helps me not get lost in the details.
Recognize the difference between “gimmick” and “substance”. I see people post AI videos on X, and most of them are gimmicks (= they don’t have any message, and/or are just tropes). You don’t have to aspire to create anything of substance, but some discernment really helps either way.
Spend some time thinking about taste and what it implies.
Have ChatGPT write your image / video prompt. Your job is to tell it your idea, intent, message. This middle step adds a big layer of detail.
Get familiar with JSON. Prompts in JSON format are the ideal language when talking to AI.
What a prompt in JSON format looks like. This one is for a video.
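Here’s a hypothetical example (field names and values are purely illustrative, and every tool expects a slightly different schema):

```json
{
  "scene": "a lighthouse on a stormy coast at dusk",
  "subject": "an old keeper climbing the spiral staircase",
  "camera": {
    "shot": "slow dolly-in",
    "angle": "low",
    "lens": "35mm"
  },
  "lighting": "warm lantern glow against cold blue storm light",
  "style": "cinematic, grainy film look",
  "duration_seconds": 8,
  "aspect_ratio": "16:9"
}
```

The point isn’t the exact keys; it’s that a structured prompt forces you to spell out scene, camera, lighting, and style separately instead of cramming everything into one sentence.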
I will surely dedicate another post to the insights I’ve had while using all the different AI tools. For now let’s keep it concise.
Finally, below is a list of some of the AI tools I’m using. Give them a try; they all have a free tier.
AI Tools I Use
Voice & Script Generation: I use Google’s NotebookLM, a free tool that transforms PDFs, websites, or text into audio summaries. This makes scripting and voice generation easy, even for complex topics. For cases when I need a custom voice (and for all things text-to-speech), ElevenLabs is my go-to.
Music Creation: Tools like Suno and Udio enable me to generate background music for any mood, or entire songs, if I happen to have any song idea. You can use lyrics or create instrumental tracks.
Video Generation: Platforms such as Runway, Luma, and Kling AI allow video creation and animation, offering a variety of capabilities depending on the project. These days I also increasingly use Dreamina by ByteDance/CapCut.
Image Generation: I prefer Ideogram, Flux, Reve, and Leonardo.ai for diverse artistic styles. ChatGPT image generation is great for Ghibli-style illustrations or comics. Unlike Midjourney, which tends to create polished, predictable images, these tools yield more unexpected and raw visuals, perfect for experimental art.
That’s it for now! If you want to *listen* to me talk about my creative process, here’s a video I made some time ago:
I don’t know about you, but I’m definitely not into coding all that much. I know just enough Python to write a short script, like one that generates a poem based on a lexicon.
As a humanities/linguistics person, I’ve never really had patience for extended periods of programming. After about five hours of dealing with code, my focus fades, and I yearn for some fresh air in the park.
I’ve worked with Java, C++, Perl, Prolog, HTML, XML etc. in the past, but none ever went beyond school/university assignments.
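To give a sense of what “just enough Python” means in my case, here’s a minimal sketch of the kind of lexicon-based poem generator I mentioned (the word lists and line pattern are made up for illustration):

```python
import random

# A tiny made-up lexicon; swap in your own word lists.
LEXICON = {
    "adjective": ["silent", "burning", "hollow", "tender"],
    "noun": ["river", "machine", "orchard", "signal"],
    "verb": ["whispers", "dissolves", "remembers", "waits"],
}

def make_line() -> str:
    """Fill a fixed adjective-noun-verb pattern with random lexicon entries."""
    return (
        f"the {random.choice(LEXICON['adjective'])} "
        f"{random.choice(LEXICON['noun'])} "
        f"{random.choice(LEXICON['verb'])}"
    )

if __name__ == "__main__":
    # Four lines make a "poem"; rerun for a new one.
    print("\n".join(make_line() for _ in range(4)))
```

That’s roughly my ceiling, and I’m fine with it.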
While I get bored of coding pretty fast, I still have ideas, side projects, and weird little experiments I’d love to build.
Vibe coding makes me feel like I was born at the perfect time, as it now gives me the ability to thrive on ideas without burning out from endless manual coding.
What Is Vibe Coding?
Vibe coding is a fairly new term whose birth we can confidently pinpoint to this tweet by Andrej Karpathy:
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper…
Ever since Karpathy coined this phrase, it’s been popping up all over my social media feeds (primarily Twitter/X), causing slight FOMO every time I see someone build something.
It was also Karpathy who stated that “the hottest new programming language is English.”
That’s basically the essence of vibe coding: You tell the machine what you want, it gets to work, and you go microwave last night’s leftovers.
I think we’re experiencing a substantial paradigm shift. Programming isn’t what it used to be (writing boilerplate, googling syntax, manually debugging loops, etc.).
Coding tasks that are becoming obsolete:
Manually debugging simple errors
Setting up file structures from scratch
Looking up library documentation constantly
Copy-pasting Stack Overflow answers
Writing test cases by hand
Refactoring variable names manually
Commenting obvious code
Rewriting functions for different languages
Configuring basic build pipelines
Writing getters and setters
Typing repetitive API calls
Remembering regex syntax
Manually linting code
Writing basic UI layout code
Wrangling import statements
Searching for best practices
Translating pseudocode into real code
Writing “glue code” to connect APIs
Writing boilerplate code
Memorizing syntax
If we were to put it in more elevated, conceptual terms:
Vibe coding isn’t just about “building software”. It’s about shaping intent into form without wrestling with the medium.
It lets you, at least to some degree, skip the laborious wrestling with details and focus on your unique app features straight away.
That’s invaluable for beginners, solo entrepreneurs, and idea people like myself (at the same time it’s also disrupting the software dev industry, but let’s leave that topic for another time).
It’s more about seeing the purpose or the big picture, creating momentum, shipping/prototyping fast, all while using AI as your co-pilot (or more like a junior developer servant).
It makes the whole process intuition/flow-driven and thus a completely different kind of activity.
Still, if you’re not a software dev, there are important basics to consider/learn, like backend, security etc.
The below guide breaks down the essential steps for turning ideas into functional products using AI tools at every stage.
“Educational poster” made with Claude Artifacts
If you can imagine it, you can build it.
For this guide I’ve put together the best advice I’ve found, directly from people who build. These tips are especially valuable for those who have never coded or finished a final product.
There are plenty of possible rookie mistakes you’ll want to avoid.
Exhibit A:
PS: If you need more info on this topic: I’ve just published the 2nd edition of my book “Vibe Coding: Build Without Thinking”, and you can get it on Amazon.
It contains all the info you need to successfully build any type of app, website, or platform.
The guide below is just a summary of what you’ll find in the book.
My book about vibe coding is available now on Amazon
Ok, ready for the guide? Here it comes.
How To Properly Vibe Code
Here is the ultimate “how to” list for vibe coding:
Start with an idea. You need something to build. If you need inspiration, you can look at places like Reddit or app store reviews.
Understand your competition. Use AI tools like Gemini to research what others in the same space are doing to identify your angle.
Clearly articulate your idea. Define the basics of your concept. This will help you and the AI understand the goal.
Create a simple plan (PRD). Use AI like Claude to grill your idea with questions to see if it’s viable. If it holds up, have the AI write a basic one-page plan (Product Requirement Document). This forces you to clarify what you want and breaks the work into small, clear steps. Think of it like outlining what “done” looks like for each stage.
Focus on the UI first (optional but suggested). Break down the project into small, shippable chunks and have AI (like Claude) detail the UI for each, including page content, functionality, and user flow diagrams.
Generate UI components with AI tools. Turn the UI chunks into prompts for tools like v0.dev and generate the user interface piece by piece, tweaking prompts as needed.
Download the generated code. Once the UI is complete, obtain the code.
Develop the backend logic. Use AI code editors like Cursor or VS Code Copilot to add the database, backend logic, and other functional components.
Adopt a mainstream tech stack. When building a web app, consider using Next.js + Supabase because they have large user bases, many online examples, and AI is more likely to handle them correctly. Add Python if your backend needs more complex logic. For game development, learn Unity or Unreal instead of trying to “vibe-code” in less suitable environments like JavaScript for complex games. Choosing a stack AI knows well can prevent wasted time on bugs.
Work in small, manageable steps. Give the AI one step at a time, rather than asking it to “do everything at once”. Test and fix each step before moving on to the next to prevent bugs from compounding. Example prompts: “Implement Step 1.1: Add Feature A” (test and fix), then “Implement Step 2: Add Feature B”.
Use version control (Git). AI will inevitably make mistakes, so you need a way to roll back your code. Manual commits help you track progress and know exactly where to revert if AI creates issues.
Provide working code samples. Before building a full feature, create a small working script that performs the core functionality (e.g., fetching data from an API). Once it works, save it and include it in your AI prompts as a reference to ensure accuracy with third-party libraries or APIs. This can prevent wasting time on minor mismatches (see the sketch after this list).
Prompt effectively.
Share your raw idea with the AI.
Ask: “what’s unclear, risky, or missing?” to refine your understanding.
Then: “make this resonate with [my audience/customer/community]” and provide data about them.
Finally: “what would [0.01% top expert in my field] do here?” to get more advanced insights.
When stuck, start a new chat. Avoid getting trapped in a “copy error → paste to chat → fix → new error → repeat” cycle. If you hit this loop, open a fresh chat and clearly state what’s broken, what you expected, and what you’ve already tried. Include relevant logs, errors, and screenshots. A clean context can often resolve issues that endless retries won’t. The longer a chat history gets, the less effective the AI might become.
Learn the basics of programming. While AI can write code for you, understanding fundamental programming concepts is still important. This helps you spot when the AI is incorrect and keeps your projects on track. Vibe coding can even make learning easier: you learn by doing and pick up real-world skills while shipping projects.
Ship something small today. Focus on creating and releasing a minimal viable product to gain momentum. The rest will evolve from there. Remember, shipping is the tuition for the “startup school” that is always open.
Don’t be afraid to leverage different AI tools for their strengths. For example, use Gemini for research and Claude for planning and UI/UX brainstorming. You can even string together different AI agents to handle various operations.
Create a simple README file. Use AI like Claude to write a basic README that explains what you are building.
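To make the “provide working code samples” step concrete, here’s a minimal sketch of the kind of reference script I mean, assuming Python with the requests library and a public placeholder API (the URL is just an example):

```python
import requests

# Minimal reference script: fetch one record from a public placeholder API.
# Once something like this runs, paste it into your AI prompts as ground truth
# for how the API is actually called.
URL = "https://jsonplaceholder.typicode.com/todos/1"  # example endpoint

def fetch_todo() -> dict:
    response = requests.get(URL, timeout=10)
    response.raise_for_status()  # fail loudly instead of returning junk
    return response.json()

if __name__ == "__main__":
    print(fetch_todo()["title"])
```

With a snippet like this in the prompt, the AI doesn’t have to guess the endpoint, the response shape, or the error handling you expect.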
That’s it! I hope this guide will be useful to you. You can download the above poster graphic by clicking on the image.
Watch my Swetlana AI podcast episode on this topic:
Vibe coding is an interesting new concept that’s gaining popularity in AI circles.
Here’s what it’s all about:
On Feb 2, 2025, Andrej Karpathy (ex-Tesla, ex-OpenAI = AI overlord) dropped a tweet introducing “vibe coding.” It’s a coding style where you mostly stop coding. Instead, you just… vibe.
Just “see stuff, say stuff, run stuff, copy-paste stuff.”
The code writes itself (sort of).
You guide it with prompts, trust the AI to handle the rest, and don’t sweat the details.
At its core, vibe coding represents a departure from traditional coding practices, where developers manually write, debug, and maintain code.
Instead it relies heavily on AI tools to generate and manage code. Karpathy suggests that this method feels less like traditional coding and more like a fluid, almost magical process where the code grows beyond the developer’s direct comprehension.
It seems like we can now trust AI to handle the details and all the nitty-gritty, so devs can focus on big ideas instead of getting lost in the technical stuff (aka the mechanics of implementation).
Karpathy’s Vibe Coding Tools
Karpathy’s setup relies on a few key AI tools:
Cursor (the interface where the code lives)
Claude Sonnet (the brain for deeper logic), and
SuperWhisper (a voice-to-text app).
He barely touches the keyboard—just talks to the AI. Say something like, “make the sidebar padding bigger” and it just happens.
This hands-free setup shows how AI is making coding way more accessible—even for stuff that used to be annoying or too small to bother with.
Low Effort, High Trust
Karpathy’s vibe coding style is basically: trust the AI, don’t overthink it.
He hits “Accept All” without checking the changes, pastes in error messages with zero explanation, and sometimes just pokes around randomly until things work.
It sounds chaotic, but for quick side projects or weekend experiments, it gets the job done. Fast, messy, good enough.
That said, he admits the code can turn into a mess if you ever need to actually understand it later. So it’s fun and efficient, but only until you want to clean it up.
The Limits
Karpathy’s honest about the downsides. Sometimes the AI just can’t fix the bug—so you keep rewording your request or poking at the problem until it magically goes away.
That might be fine for quick hacks, but it’s not ideal for big or serious projects where clean, secure, and reliable code actually matters. Without proper review, things can get messy fast—think spaghetti code no one wants to touch later.
His point? Vibe coding is fun and fast, but it comes with trade-offs. If you care about long-term quality or working with a team, the chill approach might bite you later.
The Big Picture
Karpathy’s post clearly hit a nerve, as evidenced by the responses on X and related web discussions.
Vibe coding indicates a broader shift: AI tools (trained on code repositories) are getting so good that more people can build software without being hardcore programmers.
Tools like Cursor, Replit’s AI, and SuperWhisper make coding feel less like coding. It’s closer to chatting with a clever assistant that builds stuff for you. This fits right in with the low-code/no-code movement—more access, fewer barriers.
Not everyone is a fan though. Some devs love the speed and freedom. Others worry we’re building unstable tech with no one left who understands how it works.
It’s still early to say, but vibe coding might be more than a trend. It could become a whole new way of building software. With AI handling the actual labor of coding, devs can move faster, get more creative, and maybe even work more like artists than engineers.
But there are also the big concerns (and we need to think about how to mitigate these in the future):
What happens to code quality?
Who’s responsible when AI-generated code causes problems?
What will it do to the job market?
So far people are divided on this topic. Some see vibe coding as the future. Others think it’s only safe in the hands of experts like Karpathy. The rest are still coding manually.
AI replaces jobs, but it also creates new jobs. One of them is the rapidly growing field of “AI Governance”.
People working in AI Governance are generally expected to apply ethical, legal, and societal expertise to shape AI systems.
Key Points
AI Governance ensures ethical and responsible use of AI technologies.
Involves creating policies, managing risks, and ensuring compliance.
Requires skills in AI, law, ethics, and communication.
Growing field with increasing regulations and demand for experts.
What is AI Governance?
AI Governance is about making sure AI systems are used safely and fairly. It involves setting rules and guidelines to prevent problems like bias, privacy issues, and unethical decisions. Professionals in this field work to build trust in AI by ensuring it respects human rights and aligns with societal values.
Roles and Responsibilities
People in AI Governance do things like:
Make and follow AI policies.
Check if AI systems follow laws and regulations.
Lead teams to handle AI ethics and risks.
Work with different groups to make sure AI fits business and legal needs.
Common jobs include AI Governance Manager, AI Ethics Officer, and AI Compliance Officer.
Skills Needed
You need a mix of skills, such as:
Knowing how AI works and what it can do.
Understanding laws and rules about AI.
Leading teams and managing projects.
Explaining complex AI ideas to people without technical backgrounds.
Managing risks and ensuring AI is used ethically.
Surprising Growth and Demand
It’s surprising how fast this field is growing, with a market size of USD 145.5 million in 2023 and expected to grow at over 52% annually until 2032, driven by the need for ethical hacking and cybersecurity (AI Governance Market Size & Growth Analysis Report, 2024-2032).
Comprehensive Overview of AI Governance as a Professional Field
AI Governance is an emerging and rapidly evolving professional field that focuses on ensuring the responsible, ethical, and effective development, deployment, and use of artificial intelligence (AI) technologies. This field is critical as AI becomes increasingly integrated into organizational and societal operations, necessitating robust frameworks to manage risks and maximize benefits. Below, we explore the definition, scope, roles, required skills, current trends, challenges, and career paths in AI Governance, providing a detailed analysis for professionals and enthusiasts alike.
Definition and Scope
AI Governance refers to the processes, policies, and practices that guide the ethical development, deployment, and use of AI technologies. It aims to ensure AI systems are safe, fair, and respect human rights, addressing risks such as bias, privacy infringement, and misuse. Unlike related fields like AI ethics, which focuses on moral principles, or AI law, which deals with legal compliance, AI Governance encompasses a broader oversight, integrating technical, legal, and ethical dimensions to foster trust and accountability.
Roles and Responsibilities
Professionals in AI Governance undertake a variety of roles, each critical to managing AI systems responsibly. These include:
Policy Development: Creating and implementing AI policies and guidelines to ensure ethical use, as seen in job descriptions for AI Governance Managers (1,000 Ai Governance Job Vacancies | Indeed.com).
Risk Management: Identifying and mitigating risks like bias and privacy issues, a key responsibility in roles like AI Ethics Officer (What is AI Governance?).
Team Leadership: Leading teams of AI scientists, ethicists, and governance professionals, as seen in leadership roles at organizations like GovAI (Open Positions | GovAI).
Stakeholder Collaboration: Working with business, legal, and technical teams to align AI with organizational goals, a common requirement in job listings (Ai Governance Jobs, Employment | Indeed).
Common job titles include AI Governance Manager, AI Ethics Officer, AI Compliance Officer, and AI Policy Advisor, reflecting the diverse responsibilities within the field.
Skills Required
AI Governance professionals need a multidisciplinary skill set to navigate the complexities of the field. Key skills include:
Ethical Awareness: Understanding ethical principles to prevent bias and ensure fairness, a critical aspect highlighted in Credo AI – What is AI governance?.
Project Management: Capability to implement governance frameworks, coordinating multiple stakeholders, as seen in job roles requiring leadership and strategy (Aptitudes for AI governance work — EA Forum).
These skills are often developed through education, certifications like the AI Governance Professional certification (Artificial Intelligence Governance Professional), and on-the-job experience.
Current Trends and Challenges
The AI Governance field is witnessing significant trends and challenges, shaping its evolution:
Transparency and Accountability: Emphasis on explainable AI and transparency, ensuring users understand AI decisions, as discussed in What is AI Governance? | IBM.
Ethical Considerations: Increasing focus on preventing bias and discrimination, driven by incidents like the Tay chatbot, as noted in AI governance is rapidly evolving | IBM.
Standards Development: Creation of best practices, such as those in the 9 Principles of an AI Governance Framework, to guide responsible AI use (9 Principles of an AI Governance Framework).
Career Paths
Skill Development: Acquire skills in risk management, ethics, and communication, often through on-the-job experience or programs like the AI Policy Accelerator (AI Governance Fast Track).
Job Opportunities: Look for roles in AI policy, compliance, or ethics, with entry points in organizations like GovAI or through job listings on Indeed (Ai Governance Jobs, Employment | Indeed).
The field offers exciting opportunities for those passionate about technology and ethics, with potential to shape the future of AI responsibly, as noted in career reviews like AI governance and policy – Career review.
Summary Table of AI Governance Key Aspects
Definition: Ensures ethical, safe, and fair use of AI, managing risks and building trust.
Key Roles: AI Governance Manager, Ethics Officer, Compliance Officer, Policy Advisor.
Essential Skills: AI knowledge, legal understanding, ethics, communication, risk management.
Challenges: Rapid tech changes, regulatory complexity, ensuring effectiveness and ethics.
Career Entry: Education, certifications, skills development, job roles in policy and ethics.
This comprehensive overview underscores AI Governance as a vital field, offering significant opportunities for professionals to contribute to the responsible advancement of AI technologies.
Grok is a fantastic tool for looking at what’s happening (and evaluating it), because in addition to being an AI it has access to what people are actually saying. Today I asked it about different use cases of how AI is helping people in their work.
[Model used: Grok 2]
My prompt:
What are people on X saying about how AI is helping them accomplish their things? What are their use cases?
Grok’s reply:
People on X are discussing various ways AI is assisting them in their daily tasks and professional endeavors.
Here are some of the use cases mentioned:
Content Creation: Users are employing AI for generating images, particularly for live events, indicating AI’s utility in visual content creation. AI is also used for writing and analyzing content, providing insights and creating detailed reports with case studies.
Programming and Technical Support: AI serves as an aid for coding, helping rudimentary programmers by providing code suggestions, explanations, and even writing code. This includes using AI to understand and generate code more efficiently.
Decision Making and Knowledge Repository: AI is utilized as an encyclopedia or a knowledge repository, aiding in decision-making processes by synthesizing information from various sources.
Productivity and Efficiency: There’s significant mention of AI enhancing productivity across different sectors like sales, support, operations, and administration. Use cases include AI agents managing inboxes, calendars, CRM systems, and other tools, saving considerable time (up to 5 hours a day) by automating routine tasks.
Document Conversion and Analysis: AI is praised for its ability to convert documents like PDFs into tables and perform preliminary analysis, which can then be reviewed manually for accuracy.
Ideation and Research: AI helps in brainstorming and research by analyzing and structuring thoughts or by providing in-depth analysis on otherwise mundane topics, enhancing creativity and strategic thinking.
Customer Interaction: AI is used in customer service for tasks like generating meeting minutes or handling customer support queries, improving efficiency and customer satisfaction.
These insights from X reflect a broad spectrum of applications where AI is not just a tool but an integral part of daily professional life, enhancing both the quality and speed of work.
My prompt:
List up 20 specific cases you have found
Grok’s reply:
Here are 20 specific cases where AI is helping with tasks, as found in various discussions on X:
Live Code Tutoring – AI provides real-time coding assistance and explanations.
AI Research Assistant – Assists in gathering and synthesizing research data from various sources.
Interactive Mixology – AI offers recipe suggestions and mixology tips for bartenders.
Real-Time Piano Instruction – Provides immediate feedback and lessons to piano learners.
Smart Kitchen Assistant – Helps with meal planning, recipe suggestions, and inventory management.
Interactive Gaming Companion – Enhances gameplay with AI-driven interactions or strategy advice.
AI Notes – Automatically transcribes and organizes notes during meetings or lectures.
Voice Dictation – Transcribes spoken words into text with high accuracy.
AI Slide Decks – Generates presentations based on given data or topics.
Writing and Email Automation – Assists in creating content or automating email responses.
Document Summarization – Summarizes lengthy documents or articles for quick review.
Collaborative Research – Facilitates collaborative work by managing and analyzing shared research data.
Workflow Automation – Automates repetitive workflow tasks based on user behavior patterns.
AI for Email Marketing – Personalizes email campaigns and optimizes send times.
Website Building – Assists in creating websites through AI-driven design and content suggestions.
Image Generation – Creates visual content for various applications like marketing or social media.
Copywriting – Generates ad copy, blog posts, or other content tailored to specific audiences.
Meeting Notes – Automates the taking of notes during meetings, including action items and summaries.
Research Tools – Provides quick answers to complex queries or aids in academic research.
Productivity Apps – Integrates AI to manage calendars, emails, and other productivity tools for efficiency.
These cases illustrate the diverse applications of AI in enhancing productivity, creativity, and efficiency across different sectors and personal tasks.
I asked Grok to analyze people’s tweets globally, and tell me which AI tools they tend to use.
One of Grok’s priceless features is the access to what people are saying, in real time. That’s one of the things that ChatGPT can’t do.
So: What AI tools do people actually use? Here’s what Grok says.
Grok Breaks Down People’s Favorite AI Tools
Here’s a more detailed breakdown of AI tools based on their mentions and praised uses from X posts:
Perplexity AI
Use: Primarily used for advanced search capabilities, providing focused and academic research.
Benefits: Users appreciate its ability to integrate with other models like Claude, offer detailed insights from specific URLs or file uploads, and its ease in sharing and managing information. It’s particularly praised for its vertical integration which allows for tailored searches.
V0 (by Vercel)
Use: A tool for building web UI components.
Benefits: It’s noted for its simplicity and ease of use for frontend design, making it beneficial for those who want to quickly prototype or design web interfaces.
Bolt, Cursor AI, Windsurf AI
Use: These tools are generally used for coding assistance.
Benefits: Bolt might be more suited for beginners, while Cursor and Windsurf are praised for their deep integration into the coding workflow, providing real-time code suggestions, debugging assistance, and productivity boosts. Users find these tools vital for speeding up coding tasks and improving code quality.
Claude
Use: General productivity, particularly for writing and technical tasks.
Benefits: Known for its nuanced understanding of context, which makes it excellent for tasks requiring detailed or technical writing. It’s also valued for its coding assistance capabilities.
ChatGPT
Use: Quick queries, document creation, and general information gathering.
Benefits: Users highlight its versatility, from answering straightforward questions to creating comprehensive documents. It’s seen as an all-purpose tool for many due to its conversational nature and broad knowledge base.
Abacus AI
Use: Not as widely discussed, but mentioned in contexts of general productivity or potential future use.
Benefits: While specifics are not detailed, the mention suggests it’s viewed with optimism for its capabilities, possibly in data analysis or custom AI solutions.
Kapwing, InVideo, Synthesia, Kling, Pixverse
Use: Focused on video creation and editing.
Benefits: These tools are praised for simplifying video production. Kapwing and InVideo are noted for their editing capabilities, Synthesia for AI-generated video content, and Kling and Pixverse for innovative video generation, especially in creative or marketing contexts.
SEMrush
Use: SEO and digital marketing.
Benefits: Users value it for its comprehensive digital marketing tools, including SEO, content marketing, and competitive analysis, which are crucial for enhancing online visibility.
Midjourney, DALL-E
Use: Image generation from text prompts.
Benefits: These are celebrated for their creative outputs, allowing users to visualize concepts or create art with high quality and diverse styles. They’re particularly useful for designers, marketers, or anyone in need of visual content.
Canva AI
Use: Graphic design automation.
Benefits: By integrating AI, Canva offers features like automated design suggestions, background removal, and more, simplifying the design process for users without extensive design skills.
As opposed to musicians, who seem to be chill with AI, visual artists tend to be very emotional about this topic. Their emotions are largely negative. Hence I wrote a book, to help them understand why they feel the way they feel, and what to do about it.
My book is called “Learning to Cope With AI: Anti-AI Artist’s Guide”.
It’s made for anti-AI artists who could consider making peace with AI, by looking at the reasons why this topic makes them so emotional.
Here’s one example of artists being emotional:
Here’s another one:
Why so much intensity?
Well, traditional artists like to imagine themselves as singular geniuses in a world that needs their exclusive touch, but AI challenges this by producing creations that look like they could have come from any one of us.
Why do you fear me, I’m just a machine (one of my AI music videos / Robot’s Lament by Swetlana AI)
I guess my book might touch some nerves here and there, because it just needs to be said:
AI might be pushing us into a more honest era of creativity, one that values not just skill or style but ideas and concepts that machines can’t replicate.
Inside the book, you’ll discover:
Why your fear of AI is really about your identity (spoiler: it’s not about the tech)
How to transform your artistic anxiety into creative fuel
Why your “unique artistic voice” might actually be unique after all
Practical exercises for embracing AI without selling your creative soul
A surprisingly effective method for giving your AI doppelgänger a metaphorical hug
I’ve also had NotebookLM generate a podcast about the book. Here it is:
And here is the transcript of the podcast:
AI Reviews Swetlana AI’s Book “Learning to Cope with AI” [NotebookLM Podcast Transcript]
All right. So, uh, anyone else just a little freaked out by all this AI art stuff, or is it just me?
I mean, it’s definitely, uh, shaking things up a bit.
Shaking things up. It’s like the whole art world is having a collective existential meltdown.
Yeah. Well, I think that’s kind of the point.
Okay. So, maybe that’s why we’re diving into this, uh, this book today, Learning to Cope by Swetlana AI.
Oh, yeah. The irony is not lost on me. An AI writing a self-help book for artists freaking out about AI.
Right. It’s like the ultimate meta gag. But uh let’s back up a sec for our listeners. Who is this Swetlana AI anyway?
So Swetlana AI is actually pretty fascinating. She’s a computational linguist, which means she understands like the nuts and bolts of language.
Okay, so she’s not just some random algorithm spitting out platitudes.
Nope, she’s also a writer. And get this, she’s got an art background, too.
So she’s like a triple threat. The AI renaissance woman,
pretty much. And in this book, she tackles six major anxieties that artists are supposedly facing thanks to AI.
Okay, so less Skynet takes over the art world and more. Dear diary, my algorithm is making me question my existence.
Exactly. It’s like a therapy session for the collective artistic ego.
All right. I mean, let’s dissect this thing. So, anxiety number one, fear of ego dissolution.
Yeah. That feeling that AI is your artistic doppelganger, churning out work in your style faster than you can say impostor syndrome.
It’s like looking in a mirror and seeing, I don’t know, a robot version of yourself, but with better brush strokes,
right? And that can be pretty unsettling. So Swetlana suggests that we try to reframe this anxiety?
Reframe it. How do you reframe the feeling that your entire artistic identity is being ripped off by a computer?
Well, she suggests thinking of AI as an extension of your creative potential. Like your shadow artist.
Your shadow artist. Okay, that sounds a little creepy.
Think of it as your evil twin but with better rendering skills.
So creepy. So instead of seeing AI as competition, we’re supposed to invite it to our next gallery opening.
She actually suggests collaborating with it. Imagine using AI to handle the technical stuff like the grunt work.
And that frees us up to focus on like the conceptual stuff, the emotions, the things that actually make art meaningful.
That’s the idea. AI becomes a tool to expand your artistic ego, not dissolve it.
I mean, I get that in theory, but a lot of artists have spent years honing their craft, mastering their technique, years of blood, sweat, and tears.
Exactly. And now they see AI spitting out photorealistic masterpieces in seconds. And it’s going to feel like a slap in the face, right? Like what’s the point of all that hard work if a computer can do it better and faster?
That’s the second anxiety Swetlana tackles. Loss of symbolic mastery.
Yeah. The feeling that you’ve spent years learning the secret handshake of your art form only to find out the robots have already hacked the system.
It’s like suddenly the robots are fluent in, I don’t know, ancient Sumerian or something, and you’re still struggling with basic French.
Exactly. Swetlana argues that AI mastering technique is like a kid learning all the chess moves without understanding strategy.
So, they can move the pieces, but they can’t actually play the game, right? True mastery comes from the meaning you infuse through those techniques, the story you tell, the emotions you evoke.
It’s not just about technical prowess. It’s about something deeper.
Yeah. And that’s something AI can’t replicate.
At least not yet. But what about the fact that AI can, you know, dig up all those artistic insecurities we try to hide?
Oh, yeah. That’s a good one. Swetlana calls this one embracing the shadow.
So, it’s like AI is this giant fun house mirror reflecting back all our anxieties and self-doubt
pretty much. It’s like confronting your artistic demons but in pixelated form.
So, we’re talking full-blown Jungian nightmare fuel. I’m not sure I signed up for this.
But here’s where Swetlana flips the script. She sees AI as a tool for self-awareness.
Self-awareness. How does staring into the abyss of my artistic insecurities help me become more self-aware?
By confronting those insecurities that AI triggers, you’re forced to look at those shadow selves, those darker aspects of your creative process.
And that’s supposed to be a good thing.
According to Swetlana, yes, facing those fears helps you clarify what really matters in your art. What makes it uniquely yours?
Okay, so my AI-induced existential crisis is actually a sign of artistic growth. This is starting to sound a little too new-agey for me.
Well, Swetlana delves into Jung’s shadow concept. The idea that integrating those hidden parts of ourselves is essential for personal growth.
Okay, but before we go full-on Jungian on our listeners, can we at least take a break to make fun of some bad AI art? Uh,
sure, why not?
All right, let’s find some of the most hilariously bad AI art the internet has to offer.
Okay, so before we get lost in the AI art hall of shame, Swetlana has another anxiety for us to unpack. The healthy kind of narcissism.
Wait, narcissism? Isn’t that like a bad thing? You know, like I’m an artist, therefore I’m better than you kind of vibe,
right? But Swetlana is talking about the healthy kind of narcissism. The belief in your own vision, the confidence to express it. It’s what drives creativity.
Okay. How does that fit in with all this AI stuff? It feels like AI is triggering the opposite of healthy narcissism.
It’s true. AI can make artists feel threatened, like their turf is being invaded.
It’s like everyone’s suddenly building a wall around real art and demanding that AI pay for it.
Exactly. It’s that defensive kind of narcissism. But Swetlana’s point is we need to reclaim the healthy kind, the kind that celebrates our uniqueness and uses AI as a foil to refine our own voice.
So instead of crying into our paint palettes, we should be thanking the robots for showing us how awesome we are.
It’s more about recognizing that AI challenges us to go deeper, to move beyond just mimicking a style and figure out what truly makes our art meaningful.
Okay, but can we address the elephant in the room? This whole AI thing feels like a preview of our dystopian future.
Robots taking over the world.
Exactly. They’ll steal our jobs, our art, and eventually our souls.
Is Swetlana going to give us tips on building underground art bunkers? Not quite. She dedicates a whole chapter to making peace with the future, but no bunker blueprints, unfortunately.
So, what does she say? Should we just accept our robot overlords and start painting portraits of them?
She encourages us to separate our fears about societal change from AI itself. AI is a tool just like any other. It’s how we use it that matters. Okay, but who’s doing the choosing? Maybe we should be more worried about the humans behind the AI, you know, the ones programming these things.
That’s an interesting point, and it leads to the final anxiety Swetlana tackles: redefining creative authority.
Oh, here we go. Time to stage an art world coup and reclaim our rightful place as the masters of creativity.
Not so fast. Swetlana uses an interesting analogy here: the parent-child relationship.
Okay, I’m intrigued. How does that apply to AI and art?
She says some artists see AI as this all-knowing parent figure, this authority that stifles creativity. But instead of rebelling, she argues that we should focus on differentiating ourselves. So, it’s like kids figuring out their own path, developing their own sense of self, even if their parents are like super geniuses or something.
Exactly. It’s about embracing what makes us us, even if it’s messy and unpredictable. Those are the things AI can’t replicate.
Okay. I’m starting to see a pattern here. Swetlana seems to be saying that instead of freaking out about AI, we should be using it as a tool for self-discovery.
Yeah. It’s like, thanks robots for holding up this giant mirror to my artistic soul.
Now, if you’ll excuse me, I have some deep thinking to do.
Exactly. So, it’s not about fighting the robots. It’s about figuring out how to dance with them.
And maybe even teach them a few new moves along the way. Who knows? Maybe AI will help us unlock some hidden creative potential we didn’t even know we had.
Hold on. Are you saying the robots are going to make us better artists?
It’s possible. Think of it as a forced evolution. AI is shaking things up, forcing us to confront some hard questions about what it means to be creative in a world where the lines between human and machine are blurring.
Okay, this is a lot to process.
Maybe we should give our listeners a moment to digest all of this before we, you know, drop the mic and blow their minds with the final takeaway.
Okay, so back to the big question. What does art even mean in this brave new world of AI generated masterpieces?
Right? Is it still art if a robot made it? Does it have the same soul? Do we even care?
It’s like what if we found out that the Mona Lisa was actually painted by a Roomba? Would it still be a masterpiece?
I mean it would definitely be a more impressive feat of engineering, but yeah, the whole concept of artistic genius kind of goes out the window. So, are we like doomed to become irrelevant, churning out finger paintings in a world where robots are the new Picassos?
Well, Swetlana doesn’t think so. She actually believes that AI can enhance human creativity. It’s like this weird alien force pushing us to evolve.
To evolve. Do we need to like merge with the machines to unlock some higher level of artistic consciousness? Yeah.
Become cyborgs or something?
Not exactly. It’s more about embracing the possibilities that AI opens up, using it as a tool to expand our creative horizons.
Okay. But how do we do that? Do we start by like teaching robots to appreciate the finer points of abstract expressionism?
Think of it as a collaboration, a partnership between human and machine where each brings their unique strengths to the table.
So instead of fearing the robots, we should be learning how to dance with them, maybe even teach them a few new moves along the way.
Exactly. It’s about exploring new forms of expression, experimenting with the boundaries of what art can be.
I mean, it’s a pretty big shift, right? We’ve been taught to value the human touch, the originality, the authenticity, and now we’re supposed to welcome these machine-made creations into the fold.
It’s definitely a challenge to our traditional notions of art. But Swetlana argues that this is a necessary evolution.
So, like a growing pain for the art world.
Yeah. But one that could ultimately lead to something more dynamic, more diverse, more exciting.
Okay. So, we’re basically stumbling into this uncharted artistic territory. And the robots are helping us draw the map.
That’s one way to look at it. It’s a pretty wild time to be an artist.
Wild is one word for it. Terrifying is another. But maybe, just maybe, the robots are exactly what the art world needed.
A kick in the pants.
A wake-up call. A reminder that art is always evolving, always pushing boundaries
and that our job as artists is to adapt and, you know, keep making cool s***.
Couldn’t have said it better myself. So that’s it for this deep dive into Swetlana AI’s Learning to Cope With AI. We hope you’ve enjoyed this wild ride through the AI art revolution.
Thanks for listening, and remember: don’t be afraid to experiment, to collaborate, and to push the boundaries of what art can be. Until next time, stay curious, stay creative, and don’t let the robots steal your thunder.
OpenAI, here I come. I finally got verified as a builder, and my first GPT is officially public. With the GPT store coming out one of these days, I’m thrilled to see what will happen. Looks like this could become another income stream.
Building, conceptualized image (made with DALL-E 3)
What is my first public GPT you ask?
It’s FreudGPT!
I’ve renamed it to FREUD Dream Interpreter though, as using “GPT” at the end of a name is apparently discouraged by OpenAI’s guidelines. Go give it a try if you’ve had a dream recently and want to know what it means.
While trying to get verified, I ran into one particular technical problem – getting OpenAI to approve my DNS record.
After several attempts to verify my website by adding a DNS TXT entry, it finally worked. Initially I’d press the “Verify” button and get an error message, but then I deleted the domain, re-added it, added the TXT record again, and it worked.
How To Become A Verified Builder On OpenAI
First things first, verification isn’t a big deal if you want to use your real name. It’ll get populated from your billing details and you’re good to go.
In my case, I wanted to get my website verified.
Here’s a step by step:
Obviously have an account with OpenAI. This is a no-brainer.
Go to ChatGPT, then to Settings, and then to “Builder Profile”.
Add a new domain, click “Submit”.
You’ll get a text string, copy it.
Go to your domain host and add a new DNS TXT record. Depending on the host, this will require an individual approach. I’m using WordPress; in that case you go to Settings, scroll down to “Change Website Address”, click on the “…” next to your domain, and then “Manage DNS”. From there you add a TXT entry, pasting your OpenAI code.
Go back to ChatGPT Settings and click “Verify”. If it doesn’t work straightaway, try again later, and if it still doesn’t work the next day, I’d suggest you delete and re-add your domain. (A quick way to check whether the record has propagated follows below.)
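Here’s a minimal sketch for checking the TXT record yourself, assuming Python with the third-party dnspython package (`pip install dnspython`) and a placeholder domain:

```python
import dns.resolver  # from the dnspython package

DOMAIN = "example.com"  # replace with your own domain

# Print every TXT record currently visible for the domain, so you can see
# whether the verification string has propagated yet.
for rdata in dns.resolver.resolve(DOMAIN, "TXT"):
    for chunk in rdata.strings:
        print(chunk.decode())
```

If the string OpenAI gave you doesn’t show up here, there’s no point pressing “Verify” yet; DNS changes can take a while to propagate.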
That’s it! I hope it’ll work for you.
As for my future plans: of course I have more GPTs up my sleeve.
And I recommend you try it too. GPTs are free to build, so it’s a win-only situation.
I love looking at the unique characteristics of various directors’ styles. This exploration led me to an exciting experiment: I entered the prompt “two men staring at each other in shock” into Midjourney and specified that the scene should be rendered “in the style of (director)”. The directors I chose for this experiment were, among others, Tarantino and Aronofsky, each renowned in their own right, but with starkly different storytelling styles.
The interesting thing is, in many of these images the men are not staring at each other, but instead at the camera. I guess Midjourney doesn’t care that much about the direction of the gaze. I also asked ChatGPT to write a paragraph for each director’s frame. Somehow ChatGPT’s output was surprisingly accurate, even though it couldn’t see the images.
Tarantino
Quentin Tarantino, known for his eclectic and often non-linear storytelling coupled with detailed character development, has a unique way of framing even the simplest of scenes. His narratives are filled with intense dialogues, heightened violence, and a touch of dark humor. When the AI was asked to interpret the prompt “two men staring at each other in shock” in the style of Tarantino, it generated the following:
“Clarence and Sonny, two weathered gun-slingers, find themselves in a dingy bar, the air thick with tension. Sonny’s cigar smoke swirls in the air as his hand hovers over his holster, his eyes locked onto Clarence’s. Clarence, on the other hand, is a statue, his icy gaze never leaving Sonny’s. Suddenly, a roach skitters across the bar top, breaking the silence. Both men jump, their hardened facades crumbling as they share a look of pure, unadulterated shock. A moment of shared vulnerability in an otherwise deadly game of cat and mouse.”
Aronofsky
Darren Aronofsky, on the other hand, is known for his dark, psychological narratives, often dealing with themes of obsession and the human condition. His characters are complex, their inner turmoil often taking center stage. The following is the AI’s interpretation of the same prompt, “two men staring at each other in shock”, in the style of Aronofsky:
“Elijah and Jonah, two estranged brothers, stand in a dilapidated room, the remnants of their shared childhood echoing in the silence. The peeling wallpaper, the dusty pictures, all bear testament to happier times. When Jonah reveals a long-held family secret, the impact is immediate. The brothers stand, eyes wide, their shared shock reverberating through the room. Their personal demons are laid bare, and in this moment, their shared grief becomes a tangible presence.”
Emmerich
Roland Emmerich, the master of the disaster film genre, is known for his grand-scale narratives with a focus on survival amidst catastrophe. When ChatGPT was asked to interpret the same prompt “two men staring at each other in shock” in the style of Emmerich, the result was:
“Amid the rumble of an impending disaster, scientists Robert and Daniel stand in the control room, surrounded by flashing screens and blaring sirens. They’ve just realized that their calculations were not just a theory anymore – the catastrophe is imminent. They stare at each other, their faces pale, their shock mirroring the intensity of the situation. The countdown to survival has begun.”
Kubrick
Stanley Kubrick, revered for his meticulous attention to detail and symbolic storytelling, often explored themes of humanity and dystopia. When asked to interpret the prompt in the style of Kubrick, ChatGPT spun the following:
“In a sterile, futuristic room, two androids, Adam-7 and Eve-8, analyze a forbidden piece of human history – an abstract painting. They exchange a look of shock as they process the chaotic swirls of emotion, a concept alien to their programming. Their shock signifies the dawn of a new understanding, a breach in the otherwise perfect order of their existence.”
Wes Anderson
Wes Anderson, recognized for his whimsical narratives and distinctive visual style, crafts a world that is both familiar and surreal. His characters often find themselves in odd, humorous situations. The AI’s interpretation of the prompt in Anderson’s style reads:
“Inside the quirky, pastel-colored bakery, Francois and Gustave, two competitive bakers, witness their experimental ‘Volcano Cake’ unexpectedly erupt, splattering the pristine pink walls with chocolate lava. Their eyes meet in shock, only to be followed by a burst of laughter. Amid the mess, their rivalry melts away, replaced by the sheer absurdity of the situation.”
Spielberg
Steven Spielberg, one of the most influential directors in the history of cinema, is known for his heartwarming narratives and iconic cinematic sequences. His stories often center on themes of wonder, adventure, and the triumph of the human spirit. When the AI took on Spielberg’s style, it created:
“Inside a secret government lab, young prodigies Sam and Max accidentally activate an ancient alien device. A spectral figure flickers to life, its presence a testament to civilizations beyond their imagination. They stare at each other in shock, their eyes wide with fear, wonder, and the thrill of the unknown. Their childhood adventure has just taken a turn into the extraordinary.”
Gaspar Noé
Gaspar Noé, an Argentine filmmaker, is renowned for his provocative and visceral narratives, often exploring the darkest corners of human nature and society. When the AI was tasked with interpreting the prompt “two men staring at each other in shock” in the style of Noé, it generated:
“In a grimy underbelly of the city, drug addicts Kino and Remy share the last of their stash. As the hallucinogenic effects kick in, reality warps and the mundane becomes horrifying. They stare at each other, their faces twisted in shock, fear, and a dawning realization of their descent into oblivion.”