What the Brain Sees
How a text-to-image model generates images from brain scans
https://www.deeplearning.ai/the-batch/how-a-text-to-image-model-generates-images-from-brain-scans/
https://news.1rj.ru/str/DataScienceT
❤1❤🔥1👍1
The source code for DragGAN has been released! 🔥🔥🔥
We can finally play with that marvel!
🔗 GitHub repository: https://github.com/XingangPan/DragGAN
https://news.1rj.ru/str/DataScienceT
❤🔥4👍1
📕 Constrained-Text-Generation-Studio
An AI writing assistant that lets recreational linguists, poets, creative writers, and researchers use and study large language models under hard lexical constraints.
🖥 Github: https://github.com/hellisotherpeople/constrained-text-generation-studio
📕 Paper: https://arxiv.org/abs/2306.15926v1
🔗Dataset: https://huggingface.co/datasets/Hellisotherpeople/Lipogram-e
https://news.1rj.ru/str/DataScienceT
👍3
CellViT: Vision Transformers for Precise Cell Segmentation and Classification
🖥 Github: https://github.com/tio-ikim/cellvit
⏩ Paper: https://arxiv.org/pdf/2306.15350v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/pannuke
https://news.1rj.ru/str/DataScienceT
❤🔥4👍3
A special channel for downloading the most important books for learning programming and data science:
t.me/DataScienceM
Machine Learning
Machine learning insights, practical tutorials, and clear explanations for beginners and aspiring data scientists. Follow the channel for models, algorithms, coding guides, and real-world ML applications.
Admin: @HusseinSheikho || @Hussein_Sheikho
👍2❤🔥1
💬 GLIGEN: Open-Set Grounded Text-to-Image Generation
GLIGEN’s zero-shot performance on COCO and LVIS outperforms that of existing supervised layout-to-image baselines by a large margin. Code coming soon; a hedged usage sketch follows the links below.
⭐️ Project: https://gligen.github.io/
⭐️ Demo: https://aka.ms/gligen
✅️ Paper: https://arxiv.org/abs/2301.07093
🖥 Github: https://github.com/gligen/GLIGEN
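The official code was still pending when this was posted. As a rough illustration of grounded text-to-image generation, here is a minimal sketch assuming the GLIGEN pipeline later added to Hugging Face diffusers; the pipeline class, checkpoint id, and argument names below are assumptions to verify against the diffusers docs before relying on them.

import torch
from diffusers import StableDiffusionGLIGENPipeline  # assumed pipeline name

# Assumed checkpoint id; check the Hub / diffusers docs
pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
).to("cuda")

# Each grounding phrase is tied to a normalized [x0, y0, x1, y1] box
image = pipe(
    prompt="a birthday cake on a wooden table in a sunny kitchen",
    gligen_phrases=["a birthday cake"],
    gligen_boxes=[[0.25, 0.45, 0.75, 0.90]],
    gligen_scheduled_sampling_beta=1.0,
    num_inference_steps=50,
).images[0]
image.save("gligen_out.png")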
https://news.1rj.ru/str/DataScienceT
👍2❤🔥1🏆1
🧍♂ BEDLAM: Bodies Exhibiting Detailed Lifelike Animated Motion
BEDLAM is useful for a variety of tasks, and all images, ground-truth bodies, 3D clothing, support code, and more are available for research purposes.
🖥 Github: https://github.com/pixelite1201/BEDLAM
📕 Paper: https://bedlam.is.tuebingen.mpg.de/media/upload/BEDLAM_CVPR2023.pdf
🔗Render code: https://github.com/PerceivingSystems/bedlam_render
🎞 Video: https://youtu.be/OBttHFwdtfI
👑 Dataset: https://paperswithcode.com/dataset/bedlam
https://news.1rj.ru/str/DataScienceT
❤1❤🔥1👍1
⭐️ ManimML: Communicating Machine Learning Architectures with Animation
An open-source Python library for easily generating animations of ML algorithms directly from code.
from manim import Scene
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, FeedForwardLayer

# The snippet needs a Manim Scene context for self.add / self.play;
# the wrapper class name below is illustrative
class NeuralNetworkScene(Scene):
    def construct(self):
        # Make nn
        nn = NeuralNetwork([
                Convolutional2DLayer(1, 7, filter_spacing=0.32),
                Convolutional2DLayer(3, 5, 3, filter_spacing=0.32, activation_function="ReLU"),
                FeedForwardLayer(3, activation_function="Sigmoid"),
            ],
            layer_spacing=0.25,
        )
        self.add(nn)
        # Play animation
        forward_pass = nn.make_forward_pass_animation()
        self.play(forward_pass)

# Render (CLI): manim -pql your_script.py NeuralNetworkScene

🖥 Github: https://github.com/helblazer811/manimml
📕 Paper: https://arxiv.org/abs/2306.17108v1
📌 Project: https://www.manim.community/
https://news.1rj.ru/str/DataScienceT
❤🔥3👍1
🧬NeuralFuse
🖥 Github: https://github.com/ibm/neuralfuse
⏩ Paper: https://arxiv.org/pdf/2306.16869v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/imagenet
https://news.1rj.ru/str/DataScienceT
👍2
🖥 10 Advanced Python Scripts For Everyday Programming
1. SpeedTest with Python
2. Search on Google
3. Make Web Bot
4. Fetch Song Lyrics
5. Get Exif Data of Photos
6. OCR Text from Image
7. Convert a Photo into a Cartoon
8. Empty Recycle Bin
9. Python Image Enhancement
10. Get Windows Version
https://news.1rj.ru/str/DataScienceT
1. SpeedTest with Python
# pip install pyspeedtest
# pip install speedtest
# pip install speedtest-cli
#method 1
import speedtest
speedTest = speedtest.Speedtest()
print(speedTest.get_best_server())
#Check download speed
print(speedTest.download())
#Check upload speed
print(speedTest.upload())
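Note: speedtest's download() and upload() return raw bits per second; a small, purely illustrative conversion makes the numbers readable (it re-runs the measurements):

# Convert bits/s to Mbps for display
down_bps = speedTest.download()
up_bps = speedTest.upload()
print(f"Download: {down_bps / 1_000_000:.2f} Mbps | Upload: {up_bps / 1_000_000:.2f} Mbps")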
# Method 2
import pyspeedtest
st = pyspeedtest.SpeedTest()
st.ping()
st.download()
st.upload()
2. Search on Google
# pip install google
from googlesearch import search
query = "Medium.com"
for url in search(query):
    print(url)

3. Make Web Bot
# pip install selenium
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

bot = webdriver.Chrome()  # the original passed "chromedriver.exe"; Selenium 4.6+ manages the driver itself
bot.get('http://www.google.com')
search = bot.find_element(By.NAME, 'q')  # find_element_by_name was removed in Selenium 4
search.send_keys("@codedev101")
search.send_keys(Keys.RETURN)
time.sleep(5)
bot.quit()

4. Fetch Song Lyrics
# pip install lyricsgenius
import lyricsgenius
api_key = "xxxxxxxxxxxxxxxxxxxxx"
genius = lyricsgenius.Genius(api_key)
artist = genius.search_artist("Pop Smoke", max_songs=5, sort="title")
song = artist.song("100k On a Coupe")
print(song.lyrics)

5. Get Exif Data of Photos
# Get Exif of Photo
# Method 1
# pip install pillow
import PIL.Image
import PIL.ExifTags
img = PIL.Image.open("Img.jpg")
exif_data = {
    PIL.ExifTags.TAGS[i]: j
    for i, j in img._getexif().items()
    if i in PIL.ExifTags.TAGS
}
print(exif_data)
# Method 2
# pip install ExifRead
import exifread
path_name = "Img.jpg"  # path to your photo
filename = open(path_name, 'rb')
tags = exifread.process_file(filename)
print(tags)

6. OCR Text from Image
# pip install pytesseract
import pytesseract
from PIL import Image
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
t=Image.open("img.png")
text = pytesseract.image_to_string(t, config='')
print(text)

7. Convert a Photo into a Cartoon
# pip install opencv-python
import cv2
img = cv2.imread('img.jpg')
grayimg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
grayimg = cv2.medianBlur(grayimg, 5)
edges = cv2.Laplacian(grayimg , cv2.CV_8U, ksize=5)
r,mask =cv2.threshold(edges,100,255,cv2.THRESH_BINARY_INV)
img2 = cv2.bitwise_and(img, img, mask=mask)
img2 = cv2.medianBlur(img2, 5)
cv2.imwrite("cartooned.jpg", img2)  # save the cartoonized image (img2), not just the edge mask

8. Empty Recycle Bin
# pip install winshell
import winshell
try:
    winshell.recycle_bin().empty(confirm=False, show_progress=False, sound=True)
    print("Recycle bin is emptied now")
except Exception:
    print("Recycle bin is already empty")

9. Python Image Enhancement
# pip install pillow
from PIL import Image,ImageFilter
from PIL import ImageEnhance
im = Image.open('img.jpg')
# Choose your filter:
# add a hash (#) at the start of any enhancer below you don't want to apply;
# only the last uncommented one is used
en = ImageEnhance.Color(im)
en = ImageEnhance.Contrast(im)
en = ImageEnhance.Brightness(im)
en = ImageEnhance.Sharpness(im)
# result
en.enhance(1.5).show("enhanced")

10. Get Windows Version
# pip install wmi
# Get the Windows version
import wmi
data = wmi.WMI()
for os_name in data.Win32_OperatingSystem():
    print(os_name.Caption)  # Microsoft Windows 11 Home

https://news.1rj.ru/str/DataScienceT
❤13👍7
📶 Extract Saved WiFi Passwords in Python
import subprocess
import os
import re
from collections import namedtuple
import configparser
def get_linux_saved_wifi_passwords(verbose=1):
    """Extracts saved Wi-Fi SSIDs and keys (PSK) from NetworkManager on Linux."""
    network_connections_path = "/etc/NetworkManager/system-connections/"
    fields = ["ssid", "auth-alg", "key-mgmt", "psk"]
    Profile = namedtuple("Profile", [f.replace("-", "_") for f in fields])
    profiles = []
    for file in os.listdir(network_connections_path):
        data = {k.replace("-", "_"): None for k in fields}
        config = configparser.ConfigParser()
        config.read(os.path.join(network_connections_path, file))
        for _, section in config.items():
            for k, v in section.items():
                if k in fields:
                    data[k.replace("-", "_")] = v
        profile = Profile(**data)
        if verbose >= 1:
            print_linux_profile(profile)
        profiles.append(profile)
    return profiles

def print_linux_profile(profile):
    # This helper was missing from the original snippet; the column layout is illustrative
    print(f"{str(profile.ssid):25}{str(profile.auth_alg):5}{str(profile.key_mgmt):10}{str(profile.psk):50}")

def print_linux_profiles(verbose):
    """Prints all extracted SSIDs along with Key (PSK) on Linux"""
    print("SSID                     AUTH KEY-MGMT  PSK")
    print("-" * 50)
    get_linux_saved_wifi_passwords(verbose)

https://news.1rj.ru/str/DataScienceT
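The subprocess and re imports above go unused by the Linux function; they are what a Windows version (built on the standard netsh wlan commands) would need. A hedged sketch reusing those imports follows; the function name, column formatting, and English-locale regexes are my own:

def get_windows_saved_wifi_passwords(verbose=1):
    """Extracts saved Wi-Fi SSIDs and keys on Windows by parsing `netsh wlan` output."""
    WinProfile = namedtuple("WinProfile", ["ssid", "key"])
    output = subprocess.check_output("netsh wlan show profiles",
                                     shell=True, text=True, errors="ignore")
    ssids = [s.strip() for s in re.findall(r"All User Profile\s*:\s*(.*)", output)]
    profiles = []
    for ssid in ssids:
        details = subprocess.check_output(f'netsh wlan show profile name="{ssid}" key=clear',
                                          shell=True, text=True, errors="ignore")
        key = re.search(r"Key Content\s*:\s*(.*)", details)
        profile = WinProfile(ssid=ssid, key=key.group(1).strip() if key else None)
        if verbose >= 1:
            print(f"{profile.ssid:<30}{profile.key}")
        profiles.append(profile)
    return profiles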
❤5👍3
🖥 5 useful Python automation scripts
1. Download YouTube videos
2. Automate WhatsApp messages
3. Google search with Python
4. Download Instagram posts
5. Extract audio from video files
https://news.1rj.ru/str/DataScienceT
1. Download YouTube videos

pip install pytube

from pytube import YouTube
# Specify the URL of the YouTube video
video_url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
# Create a YouTube object
yt = YouTube(video_url)
# Select the highest resolution stream
stream = yt.streams.get_highest_resolution()
# Define the output path for the downloaded video
output_path = "path/to/output/directory/"
# Download the video
stream.download(output_path)
print("Video downloaded successfully!")2. Automate WhatsApp messages
pip install pywhatkit

import pywhatkit
# Set the target phone number (with country code) and the message
phone_number = "+1234567890"
message = "Hello, this is an automated WhatsApp message!"
# Schedule the message to be sent at a specific time (24-hour format)
hour = 13
minute = 30
# Send the scheduled message
pywhatkit.sendwhatmsg(phone_number, message, hour, minute)
3. Google search with Python
pip install googlesearch-python

from googlesearch import search
# Define the query you want to search
query = "Python programming"
# Specify the number of search results you want to retrieve
num_results = 5
# Perform the search and retrieve the results
search_results = search(query, num_results=num_results, lang='en')
# Print the search results
for result in search_results:
    print(result)
4. Download Instagram posts
pip install instaloader

import instaloader
# Create an instance of Instaloader
loader = instaloader.Instaloader()
# Define the target Instagram profile
target_profile = "instagram"
# Download posts from the profile
loader.download_profile(target_profile, profile_pic=False, fast_update=True)
print("Posts downloaded successfully!")5. Extract audio from video files
pip install moviepy
from moviepy.editor import VideoFileClip
# Define the path to the video file
video_path = "path/to/video/file.mp4"
# Create a VideoFileClip object
video_clip = VideoFileClip(video_path)
# Extract the audio from the video
audio_clip = video_clip.audio
# Define the output audio file path
output_audio_path = "path/to/output/audio/file.mp3"
# Write the audio to the output file
audio_clip.write_audiofile(output_audio_path)
# Close the clips
video_clip.close()
audio_clip.close()
print("Audio extracted successfully!")https://news.1rj.ru/str/DataScienceT
❤🔥6👍5❤2
🚀 NAUTILUS: boosting Bayesian importance nested sampling with deep learning
A novel approach to boost the efficiency of the importance nested sampling (INS) technique for Bayesian posterior and evidence estimation using deep learning.
Install:
pip install nautilus-sampler

import corner
import numpy as np
from nautilus import Prior, Sampler
from scipy.stats import multivariate_normal
prior = Prior()
for key in 'abc':
    prior.add_parameter(key)

def likelihood(param_dict):
    x = [param_dict[key] for key in 'abc']
    return multivariate_normal.logpdf(x, mean=[0.4, 0.5, 0.6], cov=0.01)

sampler = Sampler(prior, likelihood)
sampler.run(verbose=True)
points, log_w, log_l = sampler.posterior()
corner.corner(points, weights=np.exp(log_w), labels='abc')
🖥 Github: https://github.com/johannesulf/nautilus
⭐️ Docs: https://nautilus-sampler.readthedocs.io/
📕 Paper: https://arxiv.org/abs/2306.16923v1
https://news.1rj.ru/str/DataScienceT
❤6
🏌️ GlOttal-flow LPC Filter (GOLF)
A DDSP-based neural vocoder.
🖥 Github: https://github.com/yoyololicon/golf
📕 Paper: https://arxiv.org/abs/2306.17252v1
🔗Demo: https://yoyololicon.github.io/golf-demo/
https://news.1rj.ru/str/DataScienceT
❤🔥3❤1👍1
🔮 SAM-PT: Segment Anything + Tracking 🔮
⭐️ SAM-PT is the first method to utilize sparse point propagation for Video Object Segmentation (VOS).
🌐 Review https://t.ly/QLMG
🌐 Paper arxiv.org/pdf/2307.01197.pdf
🌐 Project www.vis.xyz/pub/sam-pt/
🌐 Code github.com/SysCV/sam-pt
https://news.1rj.ru/str/DataScienceT
❤🔥1❤1👍1
🍸The Drunkard’s Odometry: Estimating Camera Motion in Deforming Scenes
🖥 Github: https://github.com/UZ-SLAMLab/DrunkardsOdometry
⏩ Paper: https://arxiv.org/pdf/2306.16917v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/drunkard-s-dataset
https://news.1rj.ru/str/DataScienceT
❤🔥2
🪄 Making a web app generator with open ML models
🖥 Github: https://github.com/huggingface/blog/blob/main/text-to-webapp.md
📕 HuggingFace: https://huggingface.co/blog/text-to-webapp
🔗Demo: https://huggingface.co/spaces/jbilcke-hf/webapp-factory-wizardcoder
https://news.1rj.ru/str/DataScienceT
❤🔥3👍2
🤳Filtered-Guided Diffusion
🖥 Github: https://github.com/jaclyngu/filteredguideddiffusion
⏩ Paper: https://arxiv.org/pdf/2306.17141v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/afhq
https://news.1rj.ru/str/DataScienceT
❤🔥1❤1👍1
🪩 DISCO: Human Dance Generation
⭐️ NTU (+ #Microsoft) unveils DISCO: a big step toward human dance generation.
🌐 Review https://t.ly/cNGX
🌐 Paper arxiv.org/pdf/2307.00040.pdf
🌐Project: disco-dance.github.io/
🌐 Code github.com/Wangt-CN/DisCo
https://news.1rj.ru/str/DataScienceT
👍3❤1