Codeberry Club – Telegram
Congrats 👀 🥳 🍾
👍2🍾1
March was a very loud month for AI. Here are some of the most important news items!

1. The long-awaited GPT-4 by OpenAI is finally here. This model lets users (us) go beyond text prompts: it accepts combined image-and-text prompts, like "do these ingredients make sense for baking a cake?"
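For a taste of what such a combined prompt looks like, here is a minimal sketch of the message structure in OpenAI's chat-completions format. This only assembles the request body (no API call is made); the image URL is a placeholder assumption.

```python
# Sketch of a combined image + text prompt for a multimodal chat model.
# The message shape follows OpenAI's chat-completions content-parts format;
# the image URL below is a placeholder, not a real resource.

def build_multimodal_prompt(question: str, image_url: str) -> list:
    """Pair a text question with an image in a single user message."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

messages = build_multimodal_prompt(
    "Do these ingredients make sense for baking a cake?",
    "https://example.com/ingredients.jpg",  # placeholder image
)
```

The same `messages` list would then be passed to the chat-completions endpoint along with the model name.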

2. Look at PaLM-E by Google. They rushed to release their model before their frenemies at OpenAI could ship GPT-4. This model combines language prompts with robotic automation. Watch the video; now you should be scared!

3. LLaMA (a GPT rival by Meta) was leaked to torrents via an infamous PR on GitHub. The model was initially (and still is) released exclusively to researchers and prohibited for commercial use until Meta is done with hallucination validation and other fine-tuning.

4. Alpaca is another GPT rival, this time from Stanford University, and again not suitable for commercial use. However, the Stanford researchers did marvellous transfer learning from ChatGPT (v3.5) to train their model at a fraction of the cost.
👍1
5. The open-source model BLOOM is available on Hugging Face. If you have some 16 GB of memory and a few GPU cores to spare for inference, you can give it a spin.
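If you don't have the hardware to spare, you can also call BLOOM through the hosted Hugging Face Inference API. Below is a sketch that only assembles the request (URL, JSON body, headers) in the Inference API's conventions; the token is a placeholder assumption and no network call is made.

```python
# Sketch of querying BLOOM via the Hugging Face Inference API instead of
# hosting the 176B model yourself. The payload shape follows the Inference
# API conventions; the bearer token is a placeholder.

import json

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"

def build_request(prompt: str, max_new_tokens: int = 50):
    """Assemble the URL, JSON body, and headers for an inference call."""
    body = json.dumps(
        {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    ).encode()
    headers = {
        "Authorization": "Bearer <YOUR_HF_TOKEN>",  # placeholder token
        "Content-Type": "application/json",
    }
    return API_URL, body, headers
```

Any HTTP client (`requests`, `urllib`) can then POST the body to the URL with those headers.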

6. HELM is the go-to link to see what is going on in model development.
👍1
Researchers from Microsoft spent over 18 months at OpenAI.

They published a post that gives us a sneak peek at what is going on inside OpenAI and what they are thinking about.

#ai #openai #microsoft
👍2
HuggingGPT was announced by Hugging Face.

See their paper.

Most LLMs handle text only. Hugging Face proposes a central "brain" that handles multi-modal questions involving images, videos, and other media.

It covers over 24 tasks, such as classification, object detection, etc.
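The core idea is dispatch: the LLM plans which specialist Hugging Face model handles each sub-task, then stitches the results together. A toy sketch of that dispatch step (the task names and model registry below are illustrative assumptions, not the paper's actual catalogue):

```python
# Toy sketch of HuggingGPT-style task dispatch: the planner LLM names a
# sub-task, and a registry maps it to a specialist model on the Hub.
# The registry entries here are illustrative assumptions.

TASK_REGISTRY = {
    "image-classification": "google/vit-base-patch16-224",
    "object-detection": "facebook/detr-resnet-50",
    "text-generation": "gpt2",
}

def dispatch(task: str) -> str:
    """Pick the specialist model for a planned sub-task."""
    if task not in TASK_REGISTRY:
        raise ValueError(f"no model registered for task {task!r}")
    return TASK_REGISTRY[task]
```

In the real system the "brain" also orders the sub-tasks and feeds one model's output into the next.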

#HuggingGPT #GPT #LLM
👍2
We are continuing our Performance Battle!

Join us on April 19 at "Performance Battle 4: Micronaut Reactive vs Micronaut Reactive as Native"

Speaker: Vadzim Kavalkou (Chief Software Engineer in Advance Software Engineering Department at EPAM Systems).

We will talk about "a modern, JVM-based, full-stack framework for building modular, easily testable microservice and serverless applications".

Micronaut claims it is faster and cheaper thanks to lower resource consumption and build-time serialization. Trust, but verify. So Vadzim is going to verify it and compare it with Quarkus and Spring.

We will be glad to see you among the participants on April 19: https://epa.ms/2jxXU9
👍1
🚢A useful article about security in containers.

You need to worry not only about how secure your code is, but also about how secure and up-to-date the container where you deploy your code is!

https://www.darkreading.com/dr-tech/87-of-container-images-in-production-have-critical-or-high-severity-vulnerabilities

#security #containers
👍1
While we are preparing for the next Performance Battle, you can watch the previous ones.

1⃣Performance Battle 1: Spring Web vs Spring Web as Native
2⃣Performance Battle 2: Spring Reactive and Spring Reactive as Native
3⃣Performance Battle 3: Quarkus Reactive vs Quarkus Reactive as Native

On April 19 we will talk about Micronaut Reactive vs Micronaut Reactive as Native. It’s time to register if you haven’t already: https://epa.ms/2jxXU9
👍1
Pandas v2.0 now ships with Apache Arrow support!

This brings much better performance to the already excellent pandas, helping it stand up against rivals such as Polars.

https://pandas.pydata.org/docs/dev/whatsnew/v2.0.0.html

#python #pandas
👍2🥰1
😒
😁6
NEW EVENT!

Join us at “Exploring Mobile Solution Architecture”.

Date: May 10, 17:00 (UTC +03:00)
Speaker: Artem Bagritsevich (Chief Software Engineer II, EPAM)

What this topic is about?
We will delve into various approaches and mobile development frameworks including Capacitor, ReactNative, Native Script, Unity, PWA, and more.

By the end of the presentation, you will have a better understanding of which mobile solution architecture approach is best suited for your business needs.

Registration is free: https://epa.ms/Exploring-Mobile-Solution-Architecture
Have a good Friday!
😁3👍1
A real case from Xu Hao on how to use ChatGPT to:

1. Create a meaningful architecture design and recommendations.
2. Share these steps and the resulting recommendations with the rest of the team.

https://martinfowler.com/articles/2023-chatgpt-xu-hao.html

#ChatGPT #Case #GPTcase
🔥3
Hugging Face and ServiceNow have open-sourced a model that is a direct competitor to GitHub Copilot from Microsoft!

The StarCoder models are 15.5B-parameter models trained on 80+ programming languages from The Stack (v1.2), with opt-out requests excluded. The model uses Multi-Query Attention, a context window of 8192 tokens, and was trained using the Fill-in-the-Middle objective on 1 trillion tokens.

https://huggingface.co/bigcode/starcoder
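Because of the Fill-in-the-Middle training objective, you can ask StarCoder to complete code *between* a given prefix and suffix, not just at the end. This sketch only assembles the FIM prompt string using the bigcode special-token convention; generating the actual completion would require loading the model.

```python
# Assemble a Fill-in-the-Middle prompt for StarCoder. The special tokens
# <fim_prefix>, <fim_suffix>, <fim_middle> follow the bigcode tokenizer
# convention; the model's completion fills the gap after <fim_middle>.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange code before and after the gap with FIM control tokens."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt(
    "def mean(xs):\n    return ",   # code before the gap
    " / len(xs)\n",                 # code after the gap
)
```

Fed to the model, this prompt asks it to produce the missing expression (here, something like `sum(xs)`).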

#GitHub_CoPilot #starcoder
👍3
🧗🏻‍♂️
😁5
We enjoyed testing the nginx config!)

Here is a list of playgrounds for different technologies and stacks.

https://jvns.ca/blog/2023/04/17/a-list-of-programming-playgrounds/

#playgrounds
👍5
Amazon Prime Video reduced costs by up to 90% by ditching serverless and rearchitecting to a monolith.

“Moving our service to a monolith reduced our infrastructure cost by over 90%. It also increased our scaling capabilities. Today, we’re able to handle thousands of streams and we still have capacity to scale the service even further.”

https://www.primevideotech.com/video-streaming/scaling-up-the-prime-video-audio-video-monitoring-service-and-reducing-costs-by-90

#amazon #serverless #monolith
😁1
True?🌝
😢3😁2
A leaked Google document says open-source Large Language Models are catching up too quickly.

The text below is from a very recent leaked document, shared by an anonymous individual on a public Discord server who granted permission for its republication. It originates from a researcher within Google.

https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

#Large_Language_Models