👋 Tomorrow’s Tech, Delivered Today

Hi! Welcome to the 9th edition of the TomorrowToday newsletter.

We’re here to decode the AI chaos so you don't have to. Think of us as your friendly neighbourhood tech translators - we cut through the noise, translate the jargon, and spotlight new AI tools that matter for founders, builders, and curious minds.

Buckle up, because the future's moving fast and we're here to make sure you don't get left behind! ⚡

If you enjoyed today’s newsletter, please forward it to a friend & subscribe by following this link.

~7 mins read

🗞️ News Flash

🧠 GPT-5 is here, and it is free to use!

/Benchmark /Productivity

OpenAI just simplified the AI game with GPT-5, replacing their confusing lineup of models with one unified family that's smarter, faster, and accessible to everyone. The model family includes three variants - GPT-5, GPT-5 Pro, and GPT-5 Mini - with the base GPT-5 now available to free users under usage limits (higher limits for Plus subscribers, unlimited for Pro users).

Here's what makes it special: GPT-5 uses a real-time router that switches extended thinking on or off based on task complexity, delivering state-of-the-art performance on coding, writing, math, and health benchmarks. The Pro version ($200/month) thinks longer, using scaled parallel test-time compute to produce the most thorough answers, while GPT-5 Mini takes over when free and Plus users hit their rate limits.

OpenAI claims these models hallucinate less, are more honest about their limitations, and communicate clearly whether they can handle a task. It's essentially giving everyone access to a PhD-level assistant that can tackle elite problem-solving tasks.

The big picture? This move simplifies the user experience and democratises access to cutting-edge AI. The only question is how long OpenAI can maintain this edge with Anthropic, Google, and Chinese AI giants breathing down their necks in this fast-moving race.

Oh, but it’s worth mentioning that despite all this brilliance, apparently GPT-5 still can't count the Ts in "Tennessee" properly. 🤷‍♂️

🔗 Read more here: OpenAI Introducing GPT-5

Real-life use case: Complex analysis, advanced reasoning, creative writing, and coding across all skill levels - from free users to enterprise teams.

🌍 Google Genie 3: Step inside any world you can imagine

/Create /Image /Video

Forget everything you know about world-building. Google DeepMind just released Genie 3, and it's absolutely bonkers. Type "create a mystical forest with floating islands" and you get a fully navigable 3D world running at 24fps in 720p. Not a static image - an actual world you can walk around in and interact with for minutes at a time.

The crazy part? You can step inside existing content. Upload a screenshot of Edward Hopper's "Nighthawks" painting, and suddenly you're walking around that iconic diner in 3D. Feed it a drone video, and you can control the camera angle as if you're actually flying. It's like having a holodeck, but powered by text prompts.

Genie 3 supports "promptable world events" too - change the weather mid-exploration, introduce new characters, or alter objects on the fly. The AI maintains visual and physical consistency with a form of short-term memory that remembers your actions and environmental changes.

Right now, it's limited to researchers and creators due to computational costs, but public access is expected within 1-2 years. The potential applications are mind-blowing: architecture, gaming, education, AI training for robotics, disaster preparedness, and emergency training.

Real-life use case: Create immersive architecture plans, training environments, prototype game worlds, or just blow people's minds by stepping inside famous artwork.

🔓 OpenAI finally lives up to its name (sort of)

/Benchmark /Integration

In a move that shocked absolutely no one who's been watching the open-source AI movement, OpenAI decided to actually be "open" for once. They've released gpt-oss-120b and gpt-oss-20b: two state-of-the-art open-weight models that you can download, modify, and run on your own hardware.

The bigger model (120b) achieves near-parity with OpenAI's o4-mini while running on a single 80GB GPU. The smaller one (20b) matches o3-mini performance but runs on edge devices with just 16GB of memory. Both models excel at reasoning, tool use, and function calling - basically everything you'd want from a modern AI assistant.

Why the sudden generosity? Competition is fierce, and keeping everything locked behind APIs isn't sustainable when Meta, Google, and Chinese companies are releasing capable open models left and right. Plus, OpenAI needs developers in their ecosystem, and nothing builds loyalty like giving away powerful tools for free.

Early partners like AI Sweden, Orange, and Snowflake are already using these models for on-premises deployment and specialised fine-tuning. It's a smart move that acknowledges the reality: the future of AI isn't just about the biggest, most expensive models - it's about giving everyone access to capable AI that runs wherever they need it.

🔗 Read more - Introducing gpt-oss

Real-life use case: Build custom AI applications without subscription fees, fine-tune models for specific tasks, or run powerful AI on your own infrastructure without sharing sensitive data.

💡 Curiosity Corner

In this section, we aim to spotlight an incredible AI tool or use case and guide you on how you can try it.

This week's challenge: Turn your company's PDFs into AI gold (and look like a genius)

⚠️ Warning: This is our most technical tutorial yet, but it's also the one with the biggest potential to make you look like an absolute wizard at work.

Want to be the person who finally makes AI useful with your company's actual data? Here's how to transform those dusty PDF archives into a searchable, AI-powered knowledge base that'll have your boss asking, "How did you even do that?"

Most organisations have thousands of PDFs gathering digital dust - reports, manuals, policies, research papers. The problem? PDFs aren't just text. They're complex layouts with tables, diagrams, images, and text that don't always read left-to-right. Regular PDF extractors fail miserably, giving you garbled nonsense instead of clean data.

Enter Dolphin, ByteDance's open-source PDF parsing wizard.

Dolphin is a framework that converts PDFs into structured formats (Markdown, HTML, JSON) that LLMs can actually understand. It uses a two-stage "analyse-then-parse" pipeline that detects reading order and processes each element with specialised prompts.
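
To make the idea concrete, here's a toy sketch of that analyse-then-parse pattern in Python. The function names and data layout are our own invention for illustration - Dolphin's real API will differ, so check its repo for actual usage.

```python
# Toy sketch of a two-stage "analyse-then-parse" pipeline.
# Stage 1 figures out what's on the page and in what reading order;
# stage 2 routes each element to a specialised parser.

def analyse_layout(page):
    """Stage 1: return the page's elements sorted into reading order."""
    return sorted(page["elements"], key=lambda e: e["order"])

# One tiny parser per element type; a real system would apply
# specialised prompts/models here instead of string formatting.
PARSERS = {
    "text":  lambda e: e["content"],
    "table": lambda e: "| " + " | ".join(e["content"]) + " |",
}

def parse_page(page):
    """Stage 2: parse each element with its type's parser, in reading order."""
    return "\n".join(PARSERS[e["type"]](e) for e in analyse_layout(page))
```

The point is the separation of concerns: detect the structure first, then let a handler that understands each element type do the parsing.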

Here's your step-by-step guide to PDF mastery:

Step 1: Test the waters

  • Upload a complex PDF from your company (with charts, tables, weird layouts)

  • Watch as Dolphin converts it into clean, structured text

  • If you're impressed, continue to Step 2

Step 2: Set up your own Dolphin

  • Clone the open-source Dolphin repository from GitHub and follow the README's setup instructions

Step 3: Process your company's PDFs

  • Start with 5-10 important documents

  • Run them through Dolphin to get clean Markdown/JSON output

  • Organise the output into logical chunks (500-1000 words each)
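
If your documents come out as Markdown, the chunking in Step 3 can be as simple as splitting on paragraphs and packing them up to a word budget. A minimal sketch (the 1,000-word cap is just the guideline above - tune it to your setup):

```python
def chunk_paragraphs(text, max_words=1000):
    """Split text into chunks of at most max_words words,
    breaking only on paragraph boundaries (blank lines)."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        n = len(para.split())
        # Flush the current chunk if this paragraph would overflow it.
        # (A single paragraph longer than max_words still becomes its own chunk.)
        if current and count + n > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Breaking on paragraph boundaries (rather than mid-sentence) keeps each chunk coherent, which matters a lot when an AI later quotes it back as an answer.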

Step 4: Build your AI knowledge base

  • Upload the processed data to ChatGPT, Claude, or build a custom solution

  • Test with company-specific questions

  • Fine-tune your setup based on results
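
The "custom solution" route doesn't have to be fancy. Retrieval can start as naive keyword overlap - find the chunk that shares the most words with the question, then hand that chunk to your LLM as context. A minimal sketch (real setups swap this scoring for embeddings):

```python
def best_chunk(question, chunks):
    """Return the chunk sharing the most words with the question.
    Stands in for the retrieval step of a retrieval-augmented setup."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))
```

For example, `best_chunk("what is our remote work policy", chunks)` would surface the HR-policy chunk, which you then paste into the prompt alongside the question.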

Step 5: Become the office hero

  • Demo by asking the AI about obscure company policies

  • Show how it can summarise quarterly reports instantly

  • Watch everyone's jaws drop when it finds information across hundreds of pages

Pro tip: Start with HR policies or technical manuals - areas where people constantly ask the same questions. Nothing impresses management like an AI that can instantly answer "What's our remote work policy?" or "How do I expense international travel?"

This isn't just showing off - you're creating genuine business value by making institutional knowledge searchable and accessible. Plus, you'll be the go-to person when the company inevitably wants to "implement AI everywhere".

📜 AI Dictionary

AI is full of jargon, and we’re here to decode it. Each week, we’ll give you a plain-English definition of a buzzy term you’ve probably seen (but never fully understood).

Parameters - noun

The millions (or billions) of tiny decision-makers inside an AI model that determine how capable it is. Think of parameters as neural pathways in a digital brain - each one learns to recognise patterns and make predictions. Frontier models like GPT-5 are believed to have hundreds of billions of these working together to generate responses (OpenAI doesn't publish the exact count). More parameters usually mean a more capable model, but also higher costs and slower processing. It's like having a massive committee where everyone votes on what word comes next, except they do it billions of times per second.
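
For the concretely minded: a fully connected layer with n_in inputs and n_out outputs has n_in × n_out weights plus n_out biases, and a model's parameter count is just the sum over its layers. A back-of-the-envelope sketch (the layer sizes below are made up for illustration):

```python
def dense_layer_params(n_in, n_out):
    """One weight per input-output pair, plus one bias per output."""
    return n_in * n_out + n_out

# A made-up two-layer stack with Transformer-ish widths:
sizes = [768, 3072, 768]
total = sum(dense_layer_params(a, b) for a, b in zip(sizes, sizes[1:]))
print(total)  # already millions of parameters for just two small layers
```

Scale that arithmetic up across dozens of layers and much wider dimensions, and you get to the billions quickly.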

Weird & Wonderful

In this section, we aim to spotlight something weird & wonderful in the world of AI.

The intern who accidentally leaked GPT-5 🤯

[Thanks for sharing this with us, Tom Edwards!]

Picture this: You're an OpenAI intern, probably getting coffee and doing basic tasks, when you accidentally upload GPT-5's entire specification document to GitHub. Not the internal repository. The public one. Where anyone can see it.

That's exactly what happened last week, giving the entire AI community an unexpected Christmas morning. For several hours before anyone at OpenAI noticed, Reddit was going absolutely mental dissecting leaked details about GPT-5's four variants: the flagship model, GPT-5-mini for cost-effective deployments, GPT-5-nano for ultra-low latency, and GPT-5-chat for enterprise conversations.

The leak revealed everything: enhanced "agentic" capabilities, improved reasoning functions, coding improvements, and those hilariously inconsistent basic math skills we mentioned earlier. It triggered widespread debate about whether this was a genuine intern mistake or a brilliant guerrilla marketing strategy (spoiler: it was definitely a mistake).

OpenAI's response? They quietly removed the GitHub content without denying its authenticity, which only made everyone more excited. The poor intern? Rumour has it they're doing fine, though they're probably triple-checking which repository they're uploading to these days.

It's the perfect reminder that in our hyper-connected world, the biggest tech scoops often come from the most human mistakes. Sometimes the most valuable insights aren't from polished press releases - they're from nervous interns who mix up their Git commands (and, of course, from reading this newsletter).

The real lesson? Even at the cutting edge of artificial intelligence, we're all still beautifully, chaotically human. And we love that.

We’d like to ask a favour 🤝
If this email ends up in your Promotions or Spam folder, please move it to your Primary inbox. We’re working hard to bring you the best content weekly, and your support is truly appreciated. Thanks!

Thanks for reading TomorrowToday! We’d love to hear from you:

➡️ What would you like us to cover next?
➡️ Have a tool or topic we should feature?

We’re building this with (and for) you. 🚀
See you next Tuesday 👋

Keep Reading