GPT-5.4 and GPT-5.4 Thinking are now available in Perplexity for Pro and Max subscribers.
Introducing Voice Mode in Perplexity Computer.
You can now just talk and do things.
This week we launched Perplexity Computer, the most powerful AI product yet. We are just warming up.
Join us March 11 to see what's next.
https://nitter.net/perplexity_ai/status/2027444498926825900
Perplexity APIs are now in hundreds of millions of Samsung devices and 6 of the Mag 7. And we just released an API for search embeddings that outperforms Google.
Join us in San Francisco to see what's next, hear from Perplexity founders, and connect with top devs.
[…]
[Video]
Introducing Ask. Perplexity's first developer conference.
We're reserving some seats for standout devs who aren't yet on our radar. Apply here: events.perplexity.ai/ask2026
https://nitter.net/perplexity_ai/status/2027444475510038694
Both pplx-embed-v1 and pplx-embed-context-v1 are available at 0.6B and 4B parameter variants.
Read the paper: arxiv.org/abs/2602.11151
Both are available on Hugging Face (under MIT License) and through the Perplexity API:
https://docs.perplexity.ai/docs/embeddings/quickstart
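As a minimal sketch of calling the embeddings API — the endpoint path, payload fields, and response shape below are assumptions modeled on common embeddings APIs; the authoritative request format is in the quickstart linked above:

```python
import json
import os
import urllib.request

def embed(texts, model="pplx-embed-v1"):
    """Hypothetical call shape; consult the quickstart for the real endpoint and fields."""
    req = urllib.request.Request(
        "https://api.perplexity.ai/embeddings",  # assumed path
        data=json.dumps({"model": model, "input": texts}).encode(),  # assumed payload
        headers={
            "Authorization": f"Bearer {os.environ['PPLX_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return [item["embedding"] for item in body["data"]]  # assumed response shape
```

Set `PPLX_API_KEY` in your environment before calling; batching several texts per request is the usual way to amortize round-trip latency.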
We also built PPLXQuery2Query and PPLXQuery2Doc, internal web-scale benchmarks with 115K real queries evaluated against 30M documents drawn from over 1B pages.
On the ConTEB contextual retrieval benchmark, pplx-embed-context-v1-4B (INT8) beats voyage-context-3 (79.45%) and Anthropic Contextual (72.4%).
Our 4B INT8 model matches or beats top MTEB Multilingual v2 retrieval models like Qwen3-Embedding-4B and Gemini-embedding-001 while storing 4x more pages per GB.
The binary variant stores 32 times more pages per GB while maintaining excellent retrieval performance.
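The density claims follow directly from bits per dimension: FP32 uses 32 bits per dimension, INT8 uses 8 (4x denser), and a binary embedding uses 1 (32x denser). A quick back-of-envelope check (the 2048-dim size is illustrative, not the models' actual dimension):

```python
# Bytes needed to store one 2048-dim embedding at each precision.
DIM = 2048

fp32_bytes = DIM * 4        # 32 bits per dimension
int8_bytes = DIM * 1        # 8 bits per dimension
binary_bytes = DIM // 8     # 1 bit per dimension

assert fp32_bytes // int8_bytes == 4     # INT8: 4x more vectors per GB than FP32
assert fp32_bytes // binary_bytes == 32  # binary: 32x more vectors per GB than FP32
```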
Perplexity 🔁 @thorryyyy_:
Today we're releasing two embedding model families, pplx-embed-v1 and pplx-embed-context-v1.
These SOTA embedding APIs are designed specifically for real-world, web-scale retrieval.
pplx.ai/pplx-embed
https://nitter.net/perplexity_ai/status/2027094981161410710
This launch is part of a broader partnership with Samsung, with Samsung Internet up next.
Samsung will use our APIs for browser control and offer Perplexity as an optional default search engine, similar to how Mozilla lets you choose search providers.
pplx.ai/pplx-samsung
Their new AI assistant, Bixby, uses the Perplexity API for complex, web-based, or generative queries on 800M devices in 2026.
Bixby handles all on-device actions while routing any research, questions, and tasks to Perplexity in the background.
This is the first time Samsung has given system OS-level access to an app that isn't made by them or Google.
Galaxy users will be able to choose from multiple AIs on one device instead of being locked into a single assistant.
We've partnered with Samsung to bring Perplexity directly into the upcoming Galaxy S26.
Every new S26 will ship with Perplexity built in as a system-level AI, with its own wake word: "Hey Plex."
Available on web for Max subscribers today, and coming soon to Perplexity Pro and Enterprise.
perplexity.ai/computer
https://nitter.net/perplexity_ai/status/2026695805252547008
Perplexity Computer uses usage-based pricing with optional sub-agent model selection and spending caps.
Choose different models for different sub-agent tasks and control token spend.
Max users get 10,000 credits per month included with their subscription.
We're also giving a one-time bonus of […]
[Video]
Go from a single task to hundreds of active projects.
Clear your to-do list, move active projects forward, or kick off a new side project.
Follow our live stream of curated Computer tasks:
perplexity.ai/computer/live
https://nitter.net/perplexity_ai/status/2026695781563212134
[Video]
Perplexity Computer is what a personal computer in 2026 should be.
It's personal to you, remembers your past work, and is secure by default.
Hundreds of connectors, persistent memory, files, and web access, all built on top of Perplexity infrastructure.
[…]
[Video]
Perplexity Computer is massively multi-model.
Computer orchestrates models to run agents in parallel, leveraging Opus to match each task to the model best suited for it.
In total, Computer can route work across 19 different models.
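Perplexity hasn't published Computer's routing logic, but the pattern described — classify each sub-task, dispatch it to the best-matched model, and run the agents in parallel — can be sketched roughly like this (the routing table, task kinds, and model names are all placeholders, not Perplexity's actual configuration):

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder routing table; the real system uses Opus to classify tasks
# across 19 models, none of which is named here.
ROUTES = {
    "research": "model-a",
    "code": "model-b",
    "design": "model-c",
}

def run_task(task):
    # Pick the model matched to this kind of work.
    model = ROUTES.get(task["kind"], "model-default")
    # A real implementation would invoke the chosen model's API here.
    return f"{task['name']} -> {model}"

tasks = [
    {"name": "survey literature", "kind": "research"},
    {"name": "write parser", "kind": "code"},
]

# Sub-agents run concurrently, one per task.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_task, tasks))
```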
https://nitter.net/perplexity_ai/status/2026695629075001757
[Video]
Perplexity 🔁 @Alignment100:
Introducing Perplexity Computer.
Computer unifies every current AI capability into one system.
It can research, design, code, deploy, and manage any project end-to-end.
https://nitter.net/perplexity_ai/status/2026695550771540489
[Video]
Perplexity 🔁 @comet:
New voice mode upgrades are rolling out today across Perplexity and Comet for all users.
https://nitter.net/perplexity_ai/status/2026389166049865751
Gemini 3.1 Pro is now available to all Perplexity Pro and Max subscribers.
Pre-order for Comet on iOS is now live in the Apple App Store.
Sign up now.
https://apps.apple.com/us/app/comet-ai-personal-assistant/id6748622471
Claude Sonnet 4.6 is now available to all Perplexity Pro and Max subscribers.
Perplexity Deep Research now runs on Opus 4.6, improving our existing state-of-the-art results on internal and external benchmarks even further.
Available now for Max users, and gradually rolling out to Pro users.
https://xcancel.com/perplexity_ai/status/2020962433591017492
Perplexity 🔁 own post:
Opus 4.6 is now available on Perplexity for Max subscribers.
Try it in Model Council to compare it with other frontier models.
https://xcancel.com/perplexity_ai/status/2019853661149520333
Learn more about Model Council on our blog:
https://www.perplexity.ai/hub/blog/introducing-model-council
View results side by side, get clear insight into the evidence and disagreements, and see what each model uniquely contributes to the final answer.
https://xcancel.com/perplexity_ai/status/2019445019116269952
Introducing Model Council in Perplexity.
Run three frontier models at once, compare outputs, and get a more accurate, higherβconfidence answer.
Available now on web only for Perplexity Max subscribers.
https://xcancel.com/perplexity_ai/status/2019444886114824219
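The council pattern above — fan the same prompt out to several models concurrently, then compare their answers — can be sketched as follows. The model callables and the simple majority-vote aggregation are stand-ins; Perplexity's actual comparison logic isn't public:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def ask_council(prompt, models):
    """Send one prompt to every model in parallel and tally the answers."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        answers = list(pool.map(lambda m: m(prompt), models))
    # Naive aggregation: most common answer wins.
    consensus, votes = Counter(answers).most_common(1)[0]
    return {"answers": answers, "consensus": consensus, "votes": votes}

# Stand-in "models": plain functions returning canned answers.
models = [lambda p: "42", lambda p: "42", lambda p: "41"]
result = ask_council("What is 6 * 7?", models)
```

Seeing all three raw answers alongside the consensus is what lets a user inspect where the models agree and where they diverge.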
Our DRACO Benchmark is fully open-source and we're releasing the benchmark, rubrics, and methodology today.
To learn more about methodology and detailed results, read the full paper:
The dataset is available on Hugging Face:
https://pplx.ai/draco-paper