Netkas 👾 – Telegram
Netkas 👾
57 subscribers
60 photos
9 videos
1 file
19 links
Netkas's Journey
My contribution to chapa cli. I've always wanted to build CLI apps; this might be the time.

Got the login credentials from them. To be honest, the logic implemented just checks whether you put a token or not (it doesn't check its validity) 😅 Let's just say it's in its early stages, and as far as I can tell they really want to work on it.

If you want to work on CLI apps, here is the repo link. A lot of things to be improved.
Github
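The naive token check described above could look something like this. This is a hypothetical sketch, not the actual chapa cli code; the function name and token strings are made up for illustration:

```python
# Hypothetical sketch of the check described above: the CLI only verifies
# that *some* token string was supplied, never that it is a real credential.

def login(token: str) -> bool:
    # Current behaviour: any non-empty, non-whitespace token "logs you in".
    return bool(token and token.strip())

print(login("some-real-looking-token"))  # True
print(login("definitely-not-a-token"))   # True, no validity check at all
print(login(""))                         # False
```

A real fix would hit an authenticated endpoint once and reject the token on a 401, instead of trusting any non-empty string.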
My last ML project (its architecture).
I think this will be someone's final project if it works well.
MediaPipe pose estimation can be challenging to work with when there are multiple bodies.

I have no idea why it's jumping like that. I have to figure that out before moving on to activity tracking.
I was working on this project, and to be honest, I didn’t even know where to start. The idea came to me while I was wondering what would happen if you gave "sudo access" to LLMs.

I ended up creating a simple wrapper around several LLMs, including Claude 3.5, GPT-3, GPT-3.5, GPT-4, QwenCoder, and a few others (seven in total). Some of these sit behind Cloudflare protection, while others don't.
This is the concept graph. I wanted to fine-tune them using LangChain so that they can be specific to a certain use case; it also reduces hallucinations.
I also built a simple CLI app that works with them [not fine-tuned].
Claude 3.5 via POST request:

```json
# for Claude 3.5
https://husky-glenine-nextflow-78eef280.koyeb.app/claude

{"query": "Your question"}

# for Llama 405B
https://husky-glenine-nextflow-78eef280.koyeb.app/llama

{"query": "Your question"}

# for GPT-4
https://husky-glenine-nextflow-78eef280.koyeb.app/gpt4

{"query": "Your question"}

# to generate an image
https://husky-glenine-nextflow-78eef280.koyeb.app/imagine

{"query": "Your question"}

# for GPT-4 mini
https://husky-glenine-nextflow-78eef280.koyeb.app/gptmini

{"query": "Your question"}
```

Status: all
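Calling any of these endpoints is a plain JSON POST. Here is a minimal sketch using only the Python standard library; the endpoint URLs and `{"query": ...}` body come from the listing above, but the response format is an assumption, so the send step is left commented out:

```python
import json
import urllib.request

BASE = "https://husky-glenine-nextflow-78eef280.koyeb.app"

def build_request(endpoint: str, question: str) -> urllib.request.Request:
    """Build a POST request for one of the wrapper endpoints above."""
    payload = json.dumps({"query": question}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE}/{endpoint}",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Actually sending it is a network call, so it is left to the caller:
# with urllib.request.urlopen(build_request("claude", "Hello")) as resp:
#     print(resp.read().decode())

req = build_request("gpt4", "What is Rust?")
print(req.full_url)       # the /gpt4 endpoint URL
print(req.data.decode())  # the JSON body {"query": "What is Rust?"}
```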
I'm in the middle of my finals, but I had to share this—Rust APIs are unbelievably fast, like crazy fast. I can barely wrap my head around it. I was considering building all my APIs in Go since it's also fast, but it's nowhere near Rust (or even Zig).
I've decided to write all my APIs in Rust moving forward. Honestly, if Python were this fast, we might already have AGI by now. And as for Julia? Forget it!

🤯
A lot of refactoring is expected, but you can install it now:

Github

```shell
python3 -m venv myenv && source myenv/bin/activate && git clone https://github.com/abdimk/Slang && cd Slang && pip install dist/slang-1.0.0-py3-none-any.whl
```


I couldn't include the LangChain integration or ChromaDB, and some providers (Mistral AI and DeepSeek) are blocking the requests.

I also found a good provider for every Llama model: 1B, 3B, 8B, 70B, and 405B parameters.

It needs a lot of refactoring!
A simple chatbot using Slang and FastAPI.

I'm using Llama 8B, and all your chat history is in your local storage. That hallucination is due to an incorrect way of passing the chat history, which is always an issue as the history grows, unless the LLM has a large context window.

Link
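The context-window problem above is usually handled by trimming the oldest turns before each request. A minimal sketch of that idea (an assumed approach, not the actual Slang code; `trim_history` and the character budget are made up for illustration):

```python
# Keep only the most recent chat turns that fit a size budget,
# dropping the oldest ones first. A real version would count tokens,
# not characters; characters keep the sketch dependency-free.

def trim_history(history, max_chars=2000):
    """Return the newest messages whose combined length fits max_chars."""
    kept = []
    total = 0
    for msg in reversed(history):          # walk from newest to oldest
        if total + len(msg["content"]) > max_chars:
            break                          # budget exhausted: drop the rest
        kept.append(msg)
        total += len(msg["content"])
    return list(reversed(kept))            # restore chronological order

history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello!"},
    {"role": "user", "content": "x" * 3000},  # one oversized turn
    {"role": "user", "content": "what did I just say?"},
]
# Only the last turn survives: the oversized one (and everything
# before it) is dropped to stay under the budget.
print(trim_history(history))
```

Dropping whole turns from the front keeps each remaining user/assistant pair intact, which matters more than squeezing in partial messages.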
This is really funny: I found out a k8s cluster that should have been deleted like 8 months ago is still running. I think they forgot about it, and I was the last person to log out. (This was the first time I even saw how k8s works. Me and this intern were about to host Kafka / Hadoop (HDFS), but we had no clue how the deployment works, and we f**ked it up. The node can be deleted and redeployed.)

We are going to talk about it!

K8s is by far the coolest thing I've ever seen! Let's see how far the stream goes.