
Private LLM

@privatellm.ai

Local AI for Private, Uncensored Chat on iPhone, iPad, and Mac. No Cloud, No Tracking, No Logins. https://privatellm.ai

25 Followers · 1 Following · 27 Posts · Joined 01.02.2025

Latest posts by Private LLM @privatellm.ai

Detailed FAQs on Using Private LLM on iOS and macOS Have questions about Private LLM? Check out our FAQ and learn about the private, local AI chatbot that functions entirely offline on your iPhone and Mac, ensuring your data remains secure.

If you want the deeper details, we've got an FAQ entry here: privatellm.app/en/faq#Run-i...

31.01.2026 13:25 👍 0 🔁 1 💬 1 📌 0

Just to confirm, is it on iOS or Mac?

On iOS, it's indeed a bug. We need to remove the "Show When Run" toggle from the shortcut action, because Apple doesn't allow background GPU execution on iOS.

31.01.2026 09:16 👍 0 🔁 0 💬 0 📌 0

Yes. LLM inference is memory-bound: both memory capacity and memory bandwidth matter. For Macs, 64GB is a great sweet spot: you can run Llama 3.3 70B locally with GPT-4o-level reasoning. Rule of thumb: run the largest model your Mac can fit.

29.10.2025 16:31 👍 2 🔁 0 💬 0 📌 0
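The sizing rule of thumb in the post above can be turned into a quick back-of-envelope calculation. This is an illustrative sketch, not the app's actual memory accounting; the 20% overhead factor is an assumption standing in for KV cache, activations, and runtime bookkeeping.

```python
def model_ram_gb(params_billion: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Rough resident-memory estimate for a quantized model's weights.

    overhead is an assumed 20% fudge factor; real usage varies with
    context length and runtime.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 2**30

# A 70B model at 4 bits per weight comes to roughly 39 GB, which is
# why a 64GB Mac can hold Llama 3.3 70B with room to spare.
print(f"{model_ram_gb(70, 4):.1f} GB")
```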

Thanks for the shout-out! 🙌

Glad you're enjoying Private LLM. The boost you're seeing is because we're not an MLX/llama.cpp wrapper like LM Studio or Ollama (slowllama?).

We quantize each model (OmniQuant/GPTQ) specifically for Apple Silicon, so even low-RAM iPhones and Macs run fast and reason better.

29.10.2025 15:47 👍 2 🔁 0 💬 1 📌 0
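For the curious: the core idea behind weight quantization can be sketched with plain round-to-nearest group quantization. OmniQuant and GPTQ go further by optimizing the quantization parameters per layer; this simplified example only shows what "quantizing a model" means and is not Private LLM's pipeline.

```python
import numpy as np

def quantize_group(w: np.ndarray, bits: int = 4):
    """Map one group of float weights to unsigned ints plus (scale, offset)."""
    qmax = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.clip(np.round((w - lo) / scale), 0, qmax).astype(np.uint8)
    return q, scale, lo

def dequantize_group(q: np.ndarray, scale: float, lo: float) -> np.ndarray:
    """Reconstruct approximate float weights from the quantized group."""
    return q.astype(np.float32) * scale + lo

# Quantizing in small groups (e.g. 128 weights) keeps the per-weight
# reconstruction error bounded by half a quantization step.
rng = np.random.default_rng(0)
w = rng.standard_normal(128).astype(np.float32)
q, scale, lo = quantize_group(w)
max_err = float(np.abs(dequantize_group(q, scale, lo) - w).max())
```

Storing 4-bit codes plus one scale/offset pair per group is what shrinks an 8B model enough to fit on a phone.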
Estimating Worst-Case Frontier Risks of Open-Weight LLMs In this paper, we study the worst-case frontier risks of releasing gpt-oss. We introduce malicious fine-tuning (MFT), where we attempt to elicit maximum capabilities by fine-tuning gpt-oss to be as ca...

Link to the paper: arxiv.org/abs/2508.03153

25.10.2025 13:08 👍 1 🔁 0 💬 0 📌 0

Just learnt that Private LLM has been cited in an AI safety paper, because we let users download and use lots of uncensored models.

25.10.2025 13:08 👍 1 🔁 0 💬 1 📌 0

We are delighted to hear that. Please let us know if there's any particular model you'd like to see in the app.

01.09.2025 05:45 👍 0 🔁 0 💬 1 📌 0

We just shipped an update. More coming soon

31.08.2025 17:16 👍 0 🔁 0 💬 0 📌 0

OpenHands LM – Coding-focused language model based on Qwen 2.5 Coder:

* 7B (iOS + macOS) – 8GB RAM or more
* 32B (macOS only) – 32GB RAM minimum

Handles bug fixing and code refactoring tasks. Trained on real GitHub issues via reinforcement learning.

23.04.2025 20:01 👍 0 🔁 0 💬 0 📌 0

Meta-Llama 3.1 8B SurviveV3 (3-bit iOS / 4-bit macOS)

Wilderness survival assistant, offline. Knows how to build shelters, find water, navigate terrain, etc.

Runs on any iOS/Mac device with 8GB+ RAM — even off-grid.

23.04.2025 20:01 👍 0 🔁 0 💬 1 📌 0

Llama 3.1 8B UltraMedical (3-bit iOS / 4-bit macOS)

Biomedical assistant for med students, researchers, and clinicians.
Answers board-exam style questions, explains research findings, and supports clinical reasoning — privately.

Runs on 8GB+ RAM.

23.04.2025 20:01 👍 0 🔁 0 💬 1 📌 0

Perplexity's R1 1776 Distill Llama 70B

Post-trained to eliminate refusal behavior on politically sensitive topics — while preserving full reasoning ability.

Built to refuse censorship: open dialogue, independent thought, and the right to answer freely.

macOS only. Needs 48GB+ RAM.

23.04.2025 20:01 👍 0 🔁 0 💬 1 📌 0

Amoral-Gemma3-1B-v2 & gemma-3-1b-it-abliterated

Uncensored 4-bit OmniQuant-quantized fine-tunes of Gemma 3 1B.
For users who want unrestricted conversations, roleplay, and truth-seeking without moral filters. Fast and small. iOS and macOS.

23.04.2025 20:01 👍 0 🔁 0 💬 1 📌 0

Gemma 3 1B IT (4-bit QAT)
Instruction-tuned.

Multilingual. Full 32K context on iPhones with ≥6GB RAM.

Ideal for writing, Q&A, summarization — in 140+ languages.

Small enough to run on any supported iOS or Mac device.

23.04.2025 20:01 👍 0 🔁 0 💬 1 📌 0

Private LLM v1.9.7 (iOS) and v1.9.9 (macOS) are out.

This update brings Gemma 3 1B to all devices — iPhone, iPad, Mac.
And Perplexity's R1 1776 Distill Llama 70B to beefy Macs for uncensored reasoning.

Plus new models for coding, survival, and biomedicine — all local, all private. 🧵

23.04.2025 20:01 👍 0 🔁 0 💬 1 📌 1

🛠️ We've fixed a pesky crash that was affecting some newer models on older versions of macOS like Sonoma.

17.02.2025 22:18 👍 0 🔁 0 💬 0 📌 0

👀 Also, we've updated our lineup by adding support for both 3-bit and 4-bit OmniQuant-quantized versions of the EVA LLaMA 3.33 70B v0.1 model by @Nottlespike. Note that we've deprecated the previous version, EVA LLaMA 3.33 70B v0.0.

17.02.2025 22:18 👍 0 🔁 0 💬 1 📌 0

For Apple Silicon Mac users with 64GB or more RAM, we still recommend using the 4-bit OmniQuant-quantized version of 70B models.

17.02.2025 22:18 👍 0 🔁 0 💬 1 📌 0

💪 Power users, rejoice! Private LLM v1.9.8 for macOS brings 5 new 3-bit OmniQuant-quantized 70B models. These models consume around 5GB less RAM than their 4-bit counterparts, making them ideal for Apple Silicon Macs with 48GB of RAM.

17.02.2025 22:18 👍 0 🔁 0 💬 1 📌 0

📏 Now, with Private LLM, you can see the context length right in the model quick switcher! This little upgrade makes a big difference, helping you choose the perfect model for your conversation or task at a glance.

17.02.2025 22:18 👍 0 🔁 0 💬 1 📌 0

✍️ Unleash your creativity with the Gemma 2 iFable 9B model from iFable! This top-tier creative writing model works on iPad Pros with 16GB of RAM or any Apple Silicon Mac with 16GB+ RAM. No other local LLM app lets you run 9B or 14B models on iOS like Private LLM can.

17.02.2025 22:18 👍 0 🔁 0 💬 1 📌 0

- Dolphin 3.0 Llama 3.1 8B - For iOS devices with 8GB or more RAM, like the iPhone 15 Pro or newer

These are currently the best uncensored LLMs that can fit in your pocket, no holds barred!

17.02.2025 22:18 👍 1 🔁 0 💬 1 📌 0

- Dolphin 3.0 Llama 3.2 3B - For those with 6GB+ RAM on their iOS devices or any Apple Silicon Mac
- Dolphin 3.0 Qwen 2.5 0.5B, 1.5B, 3B - Compatible with nearly all modern iPhones (iPhone 12 or newer) and Macs

17.02.2025 22:18 👍 0 🔁 0 💬 1 📌 0

🐬 Say hello to the uncensored freedom of Dolphin 3.0 models! From Cognitive Computations, these models are your ticket to unfiltered AI conversations.

- Dolphin 3.0 Llama 3.2 1B - Perfect for iPhones/iPads with 4GB+ RAM or any Apple Silicon Mac

17.02.2025 22:18 👍 0 🔁 0 💬 1 📌 0

Private LLM v1.9.6 for iOS and v1.9.8 for macOS are here with 12 new models! From uncensored chats to creative writing, there's something for everyone. Let's dive in! 🧵

17.02.2025 22:18 👍 0 🔁 0 💬 1 📌 0

Thank you, @soldaini.net! And huge congratulations on launching Ai2 OLMoE - love what you're doing for local AI!

17.02.2025 21:04 👍 2 🔁 0 💬 0 📌 0

We're excited to join the BlueSky community!

01.02.2025 15:42 👍 3 🔁 0 💬 0 📌 0