Libreware – Telegram
Libreware Software Library

📡 t.me/Libreware

★ Send us your suggestions and menaces here:
https://news.1rj.ru/str/joinchat/nMOOE4YJPDFhZjZk
#bootloader unlock wall of shame

https://github.com/melontini/bootloader-unlock-wall-of-shame

updated guide, check it before buying a #phone

Over the past few years, a suspicious number of companies have started to "take care of your data", i.e. block or strictly limit your ability to unlock the bootloader on your own devices.

While this may not affect you directly, it sets a bad precedent. You never know what will get the axe next: Shizuku? ADB? Sideloading? I thought it might be a good idea to keep track of bad companies and workarounds.

#android
#Movuan #PINE64

The Movuan project was started by community member lxb and announced in a forum post as an alternative to mobile distributions using the #systemd init system. Thanks to being forked from Mobian, the project makes use of modified Mobian debos to build its images.

One of lxb's modifications is an optional noscript that can customize a Movuan image to install extra software: AndroidImpEx for importing contacts and SMS messages from an Android phone, Ungoogled Chromium, a local caching DNS resolver (bind) tunnelled through TLS (stubby) to privacy-minded servers, and a built-in ad blocker via a caching proxy (squid). These modifications reflect lxb's personal preferences, but anyone is free to use them to help improve their privacy.

https://pine64.org/2025/08/27/august_2025_movuan/
Maid - Mobile Artificial Intelligence Distribution

Maid is a free, open-source, cross-platform application for interfacing with llama.cpp models locally, and remotely with Ollama, Mistral, Google Gemini and OpenAI models.

-Choose from a wide range of models that run locally, or access remote models via API key
-Text-based output
-Image generation (selected models only)
-No video or short-clip generation yet
-Voice generation on selected models (not tested)
-Set model parameters
-Set the system prompt (make the model behave/generate output in a certain way)
-And more.
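For the remote side, Maid can point at an Ollama server. As a hedged sketch (not Maid's own code), this is the shape of a request body Ollama's /api/chat endpoint accepts; "llama3" is a placeholder model name:

```python
import json

# Body for Ollama's /api/chat endpoint -- the kind of remote backend
# Maid can connect to. POST this to http://<ollama-host>:11434/api/chat.
payload = {
    "model": "llama3",  # placeholder; use whatever model you've pulled
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,  # return one complete response instead of chunks
}
print(json.dumps(payload))
```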

Get it on

Github - https://github.com/Mobile-Artificial-Intelligence/maid/releases/latest

F-Droid - https://f-droid.org/packages/com.danemadsen.maid/

Spystore - https://play.google.com/store/apps/details?id=com.danemadsen.maid

*Don't clear the app's cache, and exclude it from your system's automatic cache cleaning: the app stores everything in the device cache*

Follow @nogoolag and @libreware for more
#ai
Maid is heating up my phone and draining the battery. I don't recommend it for lower-end phones: if a Snapdragon 8 Gen 2 behaves like this, lower-end phones will fail to run this app.

Anyway, it runs without internet!
ChatterUI - A simple app for LLMs

https://github.com/Vali-98/ChatterUI

https://news.1rj.ru/str/chatterui

ChatterUI is a native mobile frontend for LLMs.
Run LLMs on device or connect to various commercial or open source APIs. ChatterUI aims to provide a mobile-friendly interface with fine-grained control over chat structuring.

Features:
Run LLMs on-device in Local Mode
Connect to various APIs in Remote Mode
Chat with characters. (Supports the Character Card v2 specification.)
Create and manage multiple chats per character.
Customize Sampler fields and Instruct formatting
Integrates with your device’s text-to-speech (TTS) engine

Usage
Download and install the latest APK from the releases page.
iOS is currently unavailable due to a lack of iOS hardware for development.

Local Mode
ChatterUI uses llama.cpp under the hood to run gguf files on device. A custom adapter, cui-llama.rn, is used to integrate with react-native.
To use on-device inferencing, first enable Local Mode, then go to Models > Import Model / Use External Model and choose a gguf model that can fit in your device's memory. The importing functions are as follows:
Import Model: Copies the model file into ChatterUI, potentially speeding up startup time.
Use External Model: Uses a model from your device storage directly, removing the need to copy large files into ChatterUI but with a slight delay in load times.
After that, you can load the model and begin chatting!
Note: For devices with Snapdragon 8 Gen 1 and above or Exynos 2200+, it is recommended to use the Q4_0 quantization for optimized performance.
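To judge whether a Q4_0 gguf will fit in memory, a rough size estimate helps. This sketch assumes Q4_0's standard block layout (32 4-bit weights plus a 16-bit scale per block, i.e. 18 bytes per 32 weights) and ignores metadata and any tensors kept at higher precision, so treat it as a lower bound:

```python
def q4_0_size_bytes(n_params: int) -> int:
    """Rough gguf file size for a Q4_0-quantized model.

    Q4_0 packs weights in blocks of 32: each block stores 32 x 4-bit
    values plus one 16-bit scale, i.e. 18 bytes per 32 weights
    (4.5 bits per weight). Metadata and higher-precision tensors
    are ignored, so this is a lower-bound estimate.
    """
    bytes_per_block = 18
    weights_per_block = 32
    n_blocks = -(-n_params // weights_per_block)  # ceiling division
    return n_blocks * bytes_per_block

# A hypothetical 7B-parameter model comes out to roughly 3.7 GiB,
# so it needs a phone with comfortably more free RAM than that.
print(q4_0_size_bytes(7_000_000_000) / 2**30)
```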

Remote Mode
Remote Mode allows you to connect to a few common APIs from both commercial and open source projects.

Open Source Backends:
koboldcpp
text-generation-webui
Ollama

Dedicated API:
OpenAI
Claude (with ability to use a proxy)
Cohere
Open Router
Mancer
AI Horde

Generic backends:
Generic Text Completions
Generic Chat Completions
These should work with any backend that implements the standard Text Completion or Chat Completion API, such as Groq or Infermatic.
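As a hedged sketch of what such generic backends expect (not ChatterUI's own code), this is the minimal OpenAI-compatible Chat Completions request body; the model name and endpoint are placeholders:

```python
import json

# Minimal request body in the OpenAI-compatible Chat Completions shape
# that generic backends accept. "local-model" is a placeholder name.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}
# POST this to <base-url>/v1/chat/completions, adding an
# "Authorization: Bearer <key>" header if the backend requires one.
print(json.dumps(payload))
```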

Custom APIs:
Is your API provider missing? ChatterUI allows you to define APIs using its template system.
Read more about it here!

#ai #Android