prompt 🤖 AI News
Welcome to @prompt, your go-to source for AI insights, breakthroughs, and tools shaping the future of intelligence.


Contact: @LightEarendil
🧠 Altman and Masa fund a new Bell Labs

Sam Altman and Masayoshi Son are backing Episteme, a San Francisco lab founded by 27-year-old Louis Andre to reinvent scientific research.

The project will fund top scientists with salaries and equity instead of grants, letting them focus on breakthroughs in AI, energy, and biotech.

Andre calls it a “third path” between academia and startups, built to restore the spirit of Bell Labs and Xerox PARC.
🕵️‍♂️ Your chatbot might be leaking secrets

Microsoft researchers found a new cyberattack called “Whisper Leak” that can guess what you’re talking about with your AI, even when the chat is encrypted.

The trick? 📡 It reads traffic patterns like packet size and timing, not your actual words. In tests, it reached 98% accuracy on models from OpenAI, Anthropic, Google, AWS, and others.

That means someone watching your network could tell if you’re discussing politics, protests, or human rights, without breaking encryption.

⚙️ Companies like Microsoft, OpenAI, Mistral, and xAI have added defenses: random padding, token batching, and fake data packets to confuse attackers. These fixes work but can slightly slow responses.
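
For intuition, here’s a toy sketch of the padding idea, with made-up field names and sizes (the real mitigations live inside the providers’ streaming stacks): each chunk carries random filler, so its encrypted size on the wire no longer tracks how many tokens it holds.

```python
# Toy illustration of random response padding (not any provider's actual code).
import json
import secrets

def pad_chunk(text: str, min_pad: int = 16, max_pad: int = 256) -> bytes:
    # Random filler length per chunk, so identical text yields different wire sizes.
    pad_len = min_pad + secrets.randbelow(max_pad - min_pad + 1)
    payload = {
        "text": text,
        "pad": secrets.token_hex(pad_len),  # junk an eavesdropper can't separate from content
    }
    return json.dumps(payload).encode("utf-8")

# Same text, two different sizes on the wire.
print(len(pad_chunk("Hello")), len(pad_chunk("Hello")))
```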

💡 Even encrypted AI chats can reveal what you’re talking about, just not the words themselves.
🧠 OpenAI just dropped GPT 5.1 for all ChatGPT users

The update arrives with two upgraded modes: Instant and Thinking. Both hit harder, respond more naturally and follow instructions with fewer slips.
Instant now adjusts its reasoning time depending on the difficulty of the prompt. That bump pushes its math and coding scores higher while keeping chat speed fast.

Thinking becomes the heavy hitter.
It changes its deliberation per question so simple tasks feel snappy and complex ones get deeper analysis with cleaner explanations.

ChatGPT also gains new tone presets like Default, Professional, Friendly, Candid, Quirky and Efficient. A small group of users can test new sliders to tune brevity, warmth and emoji use. Changes apply instantly in all ongoing chats.

Paid users get access first. Free users are next in line. Polaris Alpha on OpenRouter is confirmed to be GPT 5.1.
🍌 Nano Banana is kinda insane right now

Google’s new image model is doing stuff most generators just can’t. People are throwing absurd prompts at it and it keeps following every rule: complex edits, camera specs, JSON character sheets, multi-step instructions, you name it. The prompt adherence is way above ChatGPT’s image model.

It’s not great at style transfer, but if you can describe it, Nano Banana usually nails it with almost no slop.

Click here if you want the full deep dive.
🤖 Google is gearing up to launch Nano Banana Pro next week

Google is expected to roll out Gemini 3 and the new Nano Banana Pro next week, and all signs point to a major upgrade. A hidden promo inside Google Vids mentions “quickly generate beautiful images… using Nano Banana Pro,” which basically confirms a jump in image quality and resolution.

The Pro label strongly suggests it’s powered by Gemini 3 Pro, not the Flash variant behind the current Nano Banana. If true, Google is clearly aiming at high-fidelity visual generation and bringing those improvements across Vids, Slides, and its whole creative suite.

For creators and teams, this could mean sharper output, better control, and production-grade visuals baked into Google’s ecosystem.

🔥 All eyes on the week of November 22.
🤖⚡️ Gemini 3 is here and Google just wired it into Search

Google dropped its new flagship model and plugged it straight into the new AI Mode in Search.

It hits record scores, handles text, images, video, and code, and comes with a million-token context window. It’s also more direct and less sugarcoated.

Google launched Antigravity too, an agent platform where the AI can plan and execute full software tasks.
🔥 Google is sweating right now

At an internal all-hands they dropped it bluntly:
they need to double AI capacity every 6 months or they won’t keep up.

Pichai says 2026 will be “intense”. Translation: demand is exploding faster than Google can build data centers.
Even Veo could have had way more users, but they couldn’t open access because they literally ran out of compute.

The real message: the bottleneck isn’t the models… it’s everything underneath.
Google’s flooring the gas, but the road is cracking under the wheels.
⚡️ AI vs LNG: who gets the gas?

US power demand is exploding because of AI. Solar can’t keep up. Batteries can’t keep up. The grid is cooked.

So the US is basically sprinting back to natural gas as the only thing that can feed the AI beast right now.

Hyperscalers want cheap gas for datacenters. LNG exporters want the same gas to ship overseas. They’re about to collide. Hard.

The next decade = gas-fired intelligence. Grow baseload first, argue about renewables later.

Bottom line:
AI demand is rewriting the entire US energy system. And the gas wars are just getting started.
DeepSeek just swung at GPT 5 and Gemini 👀🔥

China isn’t slowing down. DeepSeek dropped two new models and the confidence is wild. V3.2 claims GPT 5-level reasoning and can think while using tools like search, calculators, and code. Actual multitasking.

Then comes Speciale, a math-focused model that matches Gemini 3 Pro and hits Olympiad-level scores on math and informatics benchmarks. Ridiculous power.

The message is clear. DeepSeek wants the lead with fast, open models that punch way above their weight. Every release adds pressure on Google and OpenAI.
IBM just said the quiet part out loud about AI datacenters ⚡️💸

Everyone is pouring trillions into AI compute like it’s guaranteed to pay off. IBM’s CEO just ran the math and basically said: no chance. At today’s costs, the numbers don’t add up at all.

His take is harsh. Filling a single 1 GW datacenter costs around 80 billion dollars. Scale that to the global AI race and you land near 8 trillion. To justify it, the industry would need 800 billion in profit just to cover interest. Nobody is close.

And the kicker? He puts the chance of reaching AGI with current tech at 0 to 1 percent. Meanwhile companies keep asking devs to “implement AI” without even knowing why, and half the market is running on FOMO.

Feels like everyone is sprinting into a wall hoping it magically becomes a door.
OpenAI just went code red 🚨

Sam Altman told the team to drop everything and fix ChatGPT. Speed, reliability, personalization, better answers. All top priority. New products pushed back.

Why? Google’s new Gemini spike ⚡️ closed the gap fast. User growth is exploding, Anthropic is rising, and OpenAI is burning cash while betting billions on data centers.

Altman says a new reasoning model lands next week and already beats Google’s latest. The AI race is tightening and OpenAI knows it.
@Prompt:
Restore and colorize the uploaded historical black-and-white photograph while keeping the overall scene, pose, and subjects consistent with what is visible. Do not change the composition, but you may reconstruct missing or unclear areas when the image is too damaged to determine exact detail.

Perform a full restoration across the entire image, including:

repairing heavy fading, stains, cracks, scratches, and missing sections

reconstructing faces, clothing, and background features based on the shapes, silhouettes, and visible cues in the original

clarifying all people, objects, fabrics, and surroundings with natural detail

restoring edges and textures as they would realistically appear, without modern stylization

maintaining the historical era feel and keeping proportions natural

preserving identity and expressions as faithfully as the surviving image allows

When details are too obscured to recover directly, infer them in a historically plausible and realistic way, staying consistent with the likely clothing, hair, and environment of the period.

Apply historically accurate colorization: muted colors, natural skin tones, subtle era-appropriate hues. Avoid modern brightness or saturation.

Keep lighting and general scene structure consistent with the original photograph.

Final output should feel like a faithful reconstruction of the moment captured — restored, completed, and colorized while respecting the historical authenticity of the photo.
🎧 OpenAI is betting that the future of tech won’t be on a screen, but in your ears.

They’re rebuilding their audio AI and working on an “audio first” device that talks and reacts like a real conversation partner. At the same time, Meta, Google, Tesla and a wave of startups are all pushing toward a world where we don’t tap or swipe. We just talk.

With Jony Ive shaping the hardware, the goal is clear: less screen time, more human interaction.

The future won’t be seen. It will be heard.
💻🧯 PC makers are finally reading the room

At CES 2026, Dell basically admitted what everyone feels but no one in marketing wants to hear: people are not buying laptops because of AI stickers or NPUs. Every new Dell and Alienware machine still has AI hardware inside, but the pitch has shifted back to performance, thermals, screens, actual use cases.

AI hype is staying under the hood where it belongs. Consumers want a good laptop, not a lecture about neural engines.
🩺 OpenAI is quietly moving ChatGPT closer to your real life

With ChatGPT Health, users can link medical records and wellness apps so answers are grounded in their own health data. It’s not a doctor and it won’t diagnose you, but it aims to help you understand your information better and ask smarter questions.

Health chats live in a separate space, stay private, and aren’t used to train models. ChatGPT wants to be a personal assistant that follows you beyond search and work, while trying to earn trust where it matters most.
It sounds like a joke, but Google Research claims a prompting trick that actually works: paste your prompt twice, exactly the same, before sending it 🤖📋

In a recent preprint, they show that turning <QUERY> into <QUERY><QUERY> improves results when the model is not asked to reason. Across many benchmarks and models, repetition wins most comparisons and never clearly loses.

They also report that outputs don’t get longer and latency usually stays the same, since the extra cost is in reading the prompt, not generating the answer. Important caveat: once you ask the model to “think step by step,” the effect is mostly neutral. And padding the prompt with junk doesn’t work, so it’s not just about more context.

If you try it, repeat the entire prompt, not just the question. Just keep in mind you’re doubling input tokens, so you pay more and use up context faster.
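
If you want to try it programmatically, here’s a minimal sketch; the OpenAI Python SDK and the model name are just one possible setup, not the paper’s harness.

```python
# Minimal sketch of the prompt-doubling trick (assumes an OPENAI_API_KEY is set;
# swap in whatever client and model you actually use).
from openai import OpenAI

client = OpenAI()

def ask_doubled(query: str, model: str = "gpt-4o-mini") -> str:
    doubled = query + query  # <QUERY> becomes <QUERY><QUERY>, repeated verbatim
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": doubled}],
    )
    return response.choices[0].message.content

print(ask_doubled("List three practical uses of a million-token context window.\n"))
```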
🛌🤖 Imagine if a single night of sleep could reveal your future health.

That’s what SleepFM is claiming: a model trained on 585,000 hours of polysomnography from 65,000 people. The paper says it can predict 130 diseases, with some big numbers (C-index/AUROC): all-cause mortality 0.84, dementia 0.85, heart attack 0.81, stroke 0.78.

It’s still not as simple as wearing a smartwatch. This is lab-grade data: brain, heart, and breathing signals captured overnight.
ChatGPT is still the nº1 AI site, but its lead is shrinking fast. New Similarweb numbers show ChatGPT dropping to 64.6% global traffic share while Google’s Gemini jumped to 22%, and that’s the first time since 2023 ChatGPT has been under 65%.

The story isn’t "ChatGPT is falling apart", it’s that Google finally turned distribution into growth. Gemini is everywhere in Google’s ecosystem, and that kind of default placement is brutally effective. If this trend holds, the market won’t be one king anymore, it’ll be two giants and a long tail fighting for scraps.
RSA just hit 59.31% on ARC-AGI-2 and the method is dumb simple

Generate reasoning chains in parallel. Split into random subsets. Ask the model to merge each subset into one better chain. Repeat.

No code scaffolding. No multi-agent systems. Just the model arguing with itself.

With Gemini 3 Flash it beats Gemini DeepThink at 1/10th the cost 💸. Nearly matches Poetiq with a fraction of the complexity.

LLMs are inconsistent but weirdly good at picking the best parts from multiple attempts. RSA just turns that into a loop.
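
As a rough sketch of that loop (the repo linked below has the real implementation; the prompts, model name, population size, and subset size here are placeholders):

```python
# Sketch of recursive self-aggregation as described above; not the repo's code.
import random
from openai import OpenAI

client = OpenAI()

def call_model(prompt: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def rsa(task: str, population: int = 16, subset_size: int = 4, rounds: int = 2) -> str:
    # 1. Generate reasoning chains for the same task (in parallel in practice).
    chains = [call_model(f"Solve step by step:\n{task}") for _ in range(population)]

    for _ in range(rounds):
        # 2. Split the chains into random subsets.
        random.shuffle(chains)
        subsets = [chains[i:i + subset_size] for i in range(0, len(chains), subset_size)]
        # 3. Ask the model to merge each subset into one better chain.
        chains = [
            call_model(
                f"Task:\n{task}\n\nCandidate solutions:\n\n"
                + "\n\n---\n\n".join(subset)
                + "\n\nCombine the strongest parts into one improved solution."
            )
            for subset in subsets
        ]
        # 4. Repeat; stop once a single aggregated chain remains.
        if len(chains) == 1:
            break

    return chains[0]
```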

github.com/HyperPotatoNeo/RSA-ARC
Google just released Genie, a real time world simulator you can actually walk through 🌍 Text or images turn into explorable worlds with object permanence and environments that keep expanding instead of collapsing.

This isn’t a game engine or a video model. It’s a foundation model for worlds 🧠 Today you move and look around. Tomorrow, those worlds respond.

Learn more: https://labs.google/projectgenie