Microsoft has introduced VALL-E 2, the latest advancement in neural codec language models, marking a major milestone in text-to-speech (TTS) synthesis: it reaches human parity for the first time.
Experiments on the LibriSpeech and VCTK datasets show that VALL-E 2 outperforms all previous systems in the quality and naturalness of the generated speech.
https://news.1rj.ru/str/DataScienceT
Luma's Dream Machine video generation model: unlike Sora or KLING, it is already available for public testing.
You can try it here: https://lumalabs.ai/dream-machine
https://news.1rj.ru/str/DataScienceT
MusicGPT allows you to run the latest music generation models locally on any platform, without installing heavy dependencies such as ML frameworks.

Install with Homebrew:

brew install gabotechs/taps/musicgpt
Currently, MusicGPT only supports MusicGen from Meta, but there are plans to add more music generation models.
Quick start with Docker:
docker run -it --gpus all -p 8642:8642 -v ~/.musicgpt:/root/.local/share/musicgpt gabotechs/musicgpt --gpu --ui-expose

Or using cargo:

cargo install musicgpt

https://news.1rj.ru/str/DataScienceT
The EU automatic quantitative system automatically searches the exchange for the lowest selling price of digital currencies such as BTC, ETH, and USDT, and buys within seconds.
1. Register and get 9999 USDT, with a maximum rebate of 10% deposit.
2. USDT quantification, funds are automatically credited and withdrawn.
3. Quantification VIP1-VIP8, quantitative income 3%-13%
4. Support multiple currencies, regular quantification 1.5%-6.0%
5. Quantification is reset once every 24 hours, and quantification is once a day.
6. Promote three-level quantitative trading agents (A5%+B2%+C1%=8% reward)
24-hour uninterrupted data collection, no need for manual observation, efficient and stable profit. This is the EU automatic quantitative system.
Telegram customer service: https://news.1rj.ru/str/eu_online_service1
Member registration link: https://euexchange.cc/#/register?i=135623
EU official channel: https://news.1rj.ru/str/EUExchangevip
EU official group: https://news.1rj.ru/str/EUExchangeVipGroup
Forwarded from 🐳
How to create passive income on Telegram?
You can make it with @Whale!🥰
The best part is that you can invite as many friends as you want and make tons of money while they play🎲
What does your income consist of and how does it work?
🌟 You receive 10% of Whale's earnings from each direct referral.
🌟 1% for each 2nd level referral.
🌟 Monthly paid earnings in $TON.
The more friends you invite, the more chances you have to hit the big jackpot — get a share of the @whale jackpot when someone wins it! Sometimes it happens 👍
Referrals are counted when:
✅ Your friends follow your referral link.
✅ Their wallets and Telegram accounts were not previously members of the Whale system.
✅ They link their Telegram account to the bot.
✅ They participate in some Whale games.
How to invite friends?
Get a unique invitation link by clicking “Earn” in the application itself or in the bot, and share this link with your friends!🐳
VideoLLaMA 2, a logical evolution of its predecessor, includes a specialized spatial-temporal convolution (STC) connector that effectively captures the complex spatial and temporal dynamics of video.
https://news.1rj.ru/str/DataScienceT
Forwarded from Advert Deal
🚀 90% of people fail in #crypto because they pick the WRONG altcoins!
Be among those who know what to do! Harry spends countless hours researching altcoins that you SHOULD buy right now during this market dip🔥
💸Discover altcoins with high growth potential! He shares all the information on his channel for free ⬇️ Subscribe now:
https://news.1rj.ru/str/+-ewyDmzZceEwYzk0
https://news.1rj.ru/str/+-ewyDmzZceEwYzk0
StreamSpeech is a seamless All-in-One model for offline and simultaneous speech recognition, speech translation, and speech synthesis.
https://news.1rj.ru/str/DataScienceT
DeepSeek-Coder-V2
> Outperforms GPT-4-Turbo, Claude 3 Opus, Gemini 1.5 Pro, and Codestral on coding and math problems.
> Supports 338 programming languages and a 128K context length.
> Fully open source, in two sizes: 236B and 16B.
DeepSeek-Coder-V2 also outperforms Yi-Large, Claude 3 Opus, GLM-4, and Qwen2-72B on the Arena-Hard-Auto leaderboard.
▪️HF: https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct
▪️Github: https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf
▪️Demo: https://chat.deepseek.com/sign_in?from=coder
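A minimal sketch (not an official snippet) for trying the instruct checkpoint with Hugging Face transformers; it assumes a standard transformers + torch install and serious multi-GPU hardware, since the full model will not fit on a single consumer GPU:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Instruct"  # the checkpoint linked above
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",       # shard the weights across all available GPUs
    trust_remote_code=True,  # the DeepSeek-V2 architecture ships custom modeling code
)

messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))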
https://news.1rj.ru/str/DataScienceT
TSI-Bench: Benchmarking Time Series Imputation
🖥 Github: https://github.com/WenjieDu/Awesome_Imputation
📕 Paper: https://arxiv.org/pdf/2406.12747v1.pdf
💙 Dataset: https://github.com/WenjieDu/TSDB
https://news.1rj.ru/str/DataScienceT
¡Hola! 👋
AmigoChat - AI GPT bot. Best friend and assistant:
✅ use GPT 4 Omni
✅ generate images
✅ get ideas and hashtags for social media
✅ write SEO texts
✅ rewrite and summarize longreads
✅ choose a promotion plan
✅ chat and ask questions
Everything is FREE because amigos don't take dineros for help!🤠
👉 https://news.1rj.ru/str/Amigoo_Chat_Bot
🧠 Magnum-72B-v1 is an LLM that can write prose and poetry
Magnum-72B-v1 is based on the Qwen-2 72B.
It was trained on 55 million tokens of high-quality data. Eight AMD Instinct MI300X accelerators were used to fine-tune all model parameters.
🤗 Hugging Face
https://news.1rj.ru/str/DataScienceT
Toucan is a text-to-speech (TTS) model plus a set of tools for training, using, and teaching TTS models.
The model was created at the Institute for Natural Language Processing (IMS) at the University of Stuttgart.
Everything is written in idiomatic Python using PyTorch to make learning and testing as easy as possible.
https://news.1rj.ru/str/DataScienceT
WebScraping with Gen AI
During this session, we'll explore the following topics:
1️⃣ Basics of Web Scraping:
Understand the fundamental concepts and techniques of web scraping and its legal and ethical considerations.
2️⃣ Scraping with Gen AI:
Discover how Gen AI revolutionizes the web scraping landscape with real-world examples.
3️⃣ Jina Reader API:
Get acquainted with the Jina Reader API, a powerful tool for obtaining LLM-friendly input from URLs or web searches (a minimal usage sketch follows this list).
4️⃣ ScrapeGraphAI:
Dive into ScrapeGraphAI, a groundbreaking Python library that combines LLMs and direct graph logic for creating robust scraping pipelines.
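As a small preview of topic 3️⃣, here is a minimal sketch of calling the public Jina Reader endpoint (https://r.jina.ai/) from Python; it assumes only the requests library, and https://example.com stands in for whatever page you want LLM-friendly text from:

import requests

# Jina Reader: prefixing a page URL with https://r.jina.ai/ returns an
# LLM-friendly, markdown-like rendering of that page's main content.
target = "https://example.com"  # placeholder page to scrape
resp = requests.get(f"https://r.jina.ai/{target}", timeout=30)
resp.raise_for_status()
print(resp.text[:500])  # first 500 characters of the cleaned text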
Event Details:
🗓 Date: 22 June, Saturday
⏰ Time: 11:00 AM IST
🔗 Register now: https://www.buildfastwithai.com/events/web-scraping-with-gen-ai
Connect with the founder (IIT Delhi):
https://www.linkedin.com/in/satvik-paramkusham/
Let's learn Linear Regression in detail
Linear regression is a statistical method used to model the relationship between a dependent variable (target) and one or more independent variables (features). The goal is to find the linear equation that best predicts the target variable from the feature variables.
The equation of a simple linear regression model is:
\[ y = \beta_0 + \beta_1 x \]
Where:
- \( y \) is the predicted value.
- \( \beta_0 \) is the y-intercept.
- \( \beta_1 \) is the slope of the line (coefficient).
- \( x \) is the independent variable.
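Fitting the model means estimating \( \beta_0 \) and \( \beta_1 \) from data, typically by ordinary least squares, whose standard closed-form solution is:
\[ \beta_1 = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i}(x_i - \bar{x})^2}, \qquad \beta_0 = \bar{y} - \beta_1 \bar{x} \]
In the house-price data used below, prices rise by exactly $200 per additional square foot, so the fitted line works out to \( y = 0 + 200x \); for example, a 2,000 sq ft house is predicted at 200 × 2,000 = $400,000.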
Let's consider an example using Python and its libraries.
Suppose we have a dataset with house prices and their corresponding size (in square feet).
# Import necessary libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
# Example data
data = {
'Size': [1500, 1600, 1700, 1800, 1900, 2000, 2100, 2200, 2300, 2400],
'Price': [300000, 320000, 340000, 360000, 380000, 400000, 420000, 440000, 460000, 480000]
}
df = pd.DataFrame(data)
# Independent variable (feature) and dependent variable (target)
X = df[['Size']]
y = df['Price']
# Splitting the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Creating and training the linear regression model
model = LinearRegression()
model.fit(X_train, y_train)
# Making predictions
y_pred = model.predict(X_test)
# Evaluating the model
mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
print(f"Mean Squared Error: {mse}")
print(f"R-squared: {r2}")
# Plotting the results
plt.scatter(X, y, color='blue') # Original data points
plt.plot(X_test, y_pred, color='red', linewidth=2) # Regression line
plt.xlabel('Size (sq ft)')
plt.ylabel('Price ($)')
plt.title('Linear Regression: House Prices vs Size')
plt.show()
1. Libraries: We import necessary libraries like numpy, pandas, sklearn, and matplotlib.
2. Data Preparation: We create a DataFrame containing the size and price of houses.
3. Feature and Target: We separate the feature (Size) and the target (Price).
4. Train-Test Split: We split the data into training and testing sets.
5. Model Training: We create a LinearRegression model and train it using the training data.
6. Predictions: We use the trained model to predict house prices for the test set.
7. Evaluation: We evaluate the model using Mean Squared Error (MSE) and R-squared (R²) metrics.
8. Visualization: We plot the original data points and the regression line to visualize the model's performance.
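Once trained, the same model object can also be used for one-off predictions. A small follow-up sketch, reusing the model variable from the code above (the 2,050 sq ft figure is just an illustrative input):

# Predict the price of a hypothetical 2,050 sq ft house
new_house = pd.DataFrame({'Size': [2050]})
predicted_price = model.predict(new_house)
print(f"Predicted price: ${predicted_price[0]:,.0f}")  # about $410,000 for this perfectly linear data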
- Mean Squared Error (MSE): Measures the average squared difference between the actual and predicted values. Lower values indicate better performance.
- R-squared (R²): Represents the proportion of the variance in the dependent variable that is predictable from the independent variable(s). Values closer to 1 indicate a better fit.
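For reference, the two metrics printed by the script are computed as:
\[ \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2, \qquad R^2 = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i}(y_i - \bar{y})^2} \]
Because the example prices lie exactly on a straight line (Price = 200 × Size), the model fits the held-out points essentially perfectly, so you should see an MSE near 0 and an R² of 1.0 (up to floating-point error).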
https://news.1rj.ru/str/addlist/8_rRW2scgfRhOTc0
Forwarded from Eng. Hussein Sheikho
Delegate routine tasks to Artificial Intelligence!
The new assistant for macOS always knows what needs to be done!
💻 The new AI-powered assistant AIDE provides you with support based on your screen content. Get relevant hints and solutions to work more efficiently and productively without losing focus on your tasks.
Download and start for free now — AIDE AI
Modded-NanoGPT is a modification of Andrej Karpathy's GPT-2 training code.
Compared to the original, Modded-NanoGPT:
- trains 2 times more efficiently (it needs only 5B tokens instead of 10B to reach the same accuracy)
- has simpler code (446 lines instead of 858)
https://news.1rj.ru/str/addlist/8_rRW2scgfRhOTc0
When pre-trained from scratch, a 500M model trained on 100B tokens achieves the performance of a 1B model pre-trained on 300B tokens.
Available:
https://news.1rj.ru/str/addlist/8_rRW2scgfRhOTc0