Good day, dear community! As you know, 15% of the total MAI token supply was planned to be used for CEX listings. 5% (one-third of that allocation) has already been unlocked, but we will not use these tokens for CEX listings. The entire team made this decision in collaboration with experienced consultants.
Why?
Let's look at the statistics from listing agents who honestly shared this information with us.
60% of exchanges that charge listing fees bring no trading volume and operate on a "pay-to-announce" basis: we would get nothing from them except announcements and raised expectations within the community.
Another 20% of exchanges create fake trading volume within their own liquidity pools, which does nothing to boost the real DEX volume that we actually benefit from.
Additionally, we track all listings of new projects on Tier-1 exchanges, and even in those cases, we regularly see very low trading volume.
What do we propose?
We will not list on useless CEXs and will revisit this matter at the end of July, before the next MAI unlock. Since the project currently has 5 million unallocated MAI tokens, we have decided to use them as follows:
Detailed announcements for each event will be made shortly. As you may have noticed, we have started increasing traffic and strengthening our marketing step by step. We do not publicize the majority of our marketing efforts in order to maintain the quality of our advertising sources. Development work is also in full swing, and today we will release a new model.
Thank you for staying with us and supporting the project.
MusicGen generates new music from text prompts and audio files, making it much easier to create high-quality samples. Its developers conducted a comprehensive empirical study showing that the proposed method outperforms existing approaches on standard text-to-music benchmarks.
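For the technically curious, here is a minimal sketch of running MusicGen locally with Meta's open-source audiocraft package. This is not how the Multi AI dApp calls the model; the package and checkpoint names are assumptions based on the public release.

```python
# Minimal local MusicGen sketch (assumes `pip install audiocraft` and a GPU).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")   # larger checkpoints also exist
model.set_generation_params(duration=10)                     # seconds of audio per sample

# Text-only conditioning; the model can also be conditioned on a melody audio file.
wavs = model.generate(["lo-fi hip hop beat with warm piano chords"])
for i, wav in enumerate(wavs):
    audio_write(f"track_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```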
Share your creations in this chat: @multi_ai_experience
We want to incentivize you to reach a new ATH (All-Time High), so for exactly 48 hours we are launching the third competition for the biggest one-time purchase of the $MAI token.
1st place: 2.5 BNB
2nd place: 1.5 BNB
3rd place: 1 BNB
The first full month since the platform launch has passed!
Unique users: 17,783
Runs: 48,495
Models: 28
Completed this month:
1. The following models were released: Musicgen, Rembg, Kandinsky-2, Dreamshaper-v6, Babes-v2.0-img2img, Absolutebeauty v1.0, Vicuna-13b.
2. The first community airdrop has been completed.
3. A new version of the dApp was released.
4. Over $14,000 was spent on marketing.
Plans for July:
1. Opening of the staking pool.
2. Second community airdrop.
3. Whitepaper v2.
4. CG Listing.
5. Release at least 15 models.
6. AMA sessions in different communities.
7. Search for AI model developers and API access.
8. Multi AI's presentation at an offline crypto forum.
9. First major partnership.
Today, the team worked together to develop a detailed plan for each day until the end of July! We thank each and every one of you and hope that you believe in the success of the platform as much as we do! Stay with us and stay tuned for updates! Rest assured, there are still plenty of exciting things ahead of us!
The IFAN authors proposed a novel end-to-end, learning-based approach for single-image defocus deblurring. The approach is equipped with an Iterative Filter Adaptive Network (IFAN) specifically designed to handle spatially varying and large defocus blur.
Share your creations in this chat: @multi_ai_experience
The developers of instruct-pix2pix offer a completely new method of image editing: the model takes an input image together with a written instruction that tells it what to do, and edits the image accordingly. The training data for instruct-pix2pix was generated using a language model (GPT-3) and a text-to-image model (Stable Diffusion).
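For reference, a rough sketch of the same editing flow using the publicly released instruct-pix2pix weights via the diffusers library; the dApp handles all of this for you, and the checkpoint name below is the public one, not necessarily what our backend runs.

```python
# Sketch: instruction-based image editing with the public instruct-pix2pix checkpoint.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.jpg").convert("RGB")
edited = pipe(
    prompt="make it look like a watercolor painting",   # the written instruction
    image=image,                                         # the input image to edit
    num_inference_steps=20,
    image_guidance_scale=1.5,   # how closely the result should follow the original image
).images[0]
edited.save("edited.jpg")
```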
Share your creations in this chat: @multi_ai_experience
We are delighted to announce that our main speaker (Tim), who is loved by many, is back in action, and on Monday, we will be hosting an AMA session to discuss past events and the team's plans.
If you have been wanting to interact with the team in real-time, now is the perfect time to start preparing your questions!
Blip-2 is a model built on top of a large language model (LLM) that answers questions about images. To use it, provide an image, then ask a question about that image.
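If you prefer to experiment with BLIP-2 outside the dApp, a minimal sketch with the Hugging Face transformers library looks roughly like this; the checkpoint name is the public Salesforce release, an assumption on our part.

```python
# Sketch: visual question answering with a public BLIP-2 checkpoint.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.jpg").convert("RGB")
prompt = "Question: what is happening in this picture? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```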
Share your creations in this chat: @multi_ai_experience
Right now, we are expanding our presence on Twitter using the main Multi AI account. In the past 2 days, we have added around 40 new quests to Zealy under the #twitter section, and it is already yielding results.
We would like to remind you that the top 100 most active participants in Zealy will receive 10,000 MAI each by August 1st.
If you are not participating in the contest but would like to help with this raid, please support our replies on Twitter (Like + RT + Reply).
Every participant is important! It's time for our strong community to show itself!
TANGO is a latent diffusion model (LDM) for text-to-audio (TTA) generation. It can generate realistic audio from text prompts, including human sounds, animal sounds, natural and artificial sounds, and sound effects.
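For the curious, the authors publish TANGO with a small Python wrapper; a sketch of local usage, as far as we can tell from their repository (the import and method names are our assumption and may differ from the current release):

```python
# Sketch: text-to-audio with the public TANGO release (assumes the declare-lab/tango
# repository is installed; API names follow its README and may change).
import soundfile as sf
from tango import Tango

tango = Tango("declare-lab/tango")          # downloads the pretrained latent diffusion model
prompt = "An audience cheering and clapping"
audio = tango.generate(prompt)              # returns a waveform array
sf.write("cheering.wav", audio, samplerate=16000)
```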
Share your creations in this chat: @multi_ai_experience
The model is trained on 500,000 real photographs and generates photo-realistic images from text prompts. It is based on Stable Diffusion v2.1 & llama70b-v2-chat.
Share your creations in this chat: @multi_ai_experience
To start, draw a picture of something you'd like to generate. Then write a prompt just as you would for Stable Diffusion. The model will use your drawing as a template to guide generation, so the final image should follow the shapes of your sketch.
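This post describes our Controlnet-scribble model; a rough local equivalent uses a scribble-conditioned ControlNet with diffusers. The checkpoint names below are the public ones and are assumed rather than confirmed to match our backend.

```python
# Sketch: scribble-guided generation with a public ControlNet checkpoint.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

scribble = Image.open("my_drawing.png")    # simple line drawing used as the template
result = pipe(
    "a cozy wooden cabin in a snowy forest, detailed, photorealistic",
    image=scribble,
    num_inference_steps=25,
).images[0]
result.save("cabin.png")
```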
Share your creations in this chat: @multi_ai_experience
Create unique and attractive product images! If the uploaded image has a transparent background, the non-transparent portion will be automatically used as the product image. Otherwise, the product image will be obtained by removing the background.
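The branching logic described above (keep an existing transparent background, otherwise cut the product out first) can be sketched in a few lines. This is only an illustration of the idea using the open-source rembg package, not our production pipeline, and the flat-color backdrop stands in for the generated background.

```python
# Sketch of the described workflow: reuse transparency if present, otherwise remove the
# background, then composite the product onto a new backdrop (flat color here for brevity).
from PIL import Image
from rembg import remove

product = Image.open("product.png").convert("RGBA")

# Any pixel with alpha < 255 means the upload already has a transparent background.
has_transparency = product.getextrema()[3][0] < 255
cutout = product if has_transparency else remove(product)

backdrop = Image.new("RGBA", cutout.size, (245, 240, 230, 255))
backdrop.alpha_composite(cutout)
backdrop.convert("RGB").save("ad_ready.jpg")
```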
Share your creations in this chat: @multi_ai_experience
Hey, Multi AI lovers! ❤️
I'm pleased to announce that this evening we managed to reach a major partnership agreement, which will be announced in the coming weeks. We are expecting a full-fledged partnership and marketing support from a DEX whose native token is held by hundreds of thousands of wallets. Additionally, their token is traded on every major CEX you are familiar with. But for now, we cannot disclose specific information.
Unfortunately, this required us to slightly modify the plans for the distribution of MAI tokens, as we will need ~8.5M MAI for this partnership (not for sale, of course).
Taking into account the recently unlocked 5M MAI, we currently have 10.5M MAI at our disposal, 1M of which will be distributed among the top 100 most active participants in Zealy. Since the effectiveness of Zealy is undeniable, we have decided to use the remaining 1M to reward the most active Zealy users in August under the same conditions as in July (100 winners with 10,000 MAI each).
If you are concerned about the CEX listing: another 5M MAI will be unlocked by the end of August, which should be sufficient for listing on several CEXs, considering the potential price of MAI during that period.
As you understand, staking will have to be cancelled. However, we are confident that through the upcoming partnership, we will significantly increase the number of MAI holders, and with the potential growth in the price of MAI, each of you will be able to earn much more than through staking, which hardly brings in new holders.
But today everything has changed. It is highly likely that, an hour ago, we opened the path to a multi-million-dollar market cap. And we couldn't have done it without each and every one of you here. Thank you, family! We greatly appreciate your support over the past 2 months!
Sincerely yours, Bartosz Nowakowski.
Create simple animations using text prompts. The standout feature of this model is its very fast generation speed and low load on graphics processors.
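For reference, a rough local sketch of AnimateDiff-style text-to-animation with diffusers. The post does not name the platform's exact backend; the motion adapter and base checkpoint below are the public ones and are assumptions on our part.

```python
# Sketch: short text-to-animation clip with the public AnimateDiff motion adapter.
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

frames = pipe(
    prompt="a rocket lifting off into a starry night sky",
    num_frames=16,
    num_inference_steps=20,
).frames[0]
export_to_gif(frames, "rocket.gif")
```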
Share your creations in this chat: @multi_ai_experience
We are pleased to announce the results of the second community airdrop! We would like to thank everyone for their active participation and hope that you are satisfied with the reward!
List of winners: click
Proof of sending: click
This is a model that can be used to generate and modify images based on text prompts. It is a Latent Diffusion Model that uses two fixed, pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
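That description matches the publicly released SDXL base model; a minimal local sketch with diffusers, assuming the public stabilityai checkpoint:

```python
# Sketch: text-to-image with the public SDXL base checkpoint.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

image = pipe(
    prompt="an astronaut riding a horse on the moon, cinematic lighting",
    num_inference_steps=30,
).images[0]
image.save("astronaut.png")
```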
Share your creations in this chat: @multi_ai_experience
The second month since the platform launch has passed!
Unique users: 21,754 (+22%)*
Runs: 63,398 (+31%)*
Models: 37 (+9)*
* change from last month
Completed this month:
1. The following models were released: IFAN Defocus, Instruct-pix2pix, Blip-2, Tango, EpicRealism, Controlnet-scribble, Ad-inpaint, Animate-diff, SDXL.
2. The July Biggest Buy Competition was held.
3. The second community airdrop has been completed.
4. Twitter Blue and replies to influencers' tweets.
5. The first major partnership agreement.
Plans for August:
1. Official partnership announcement.
2. Third community airdrop.
3. CG Listing.
4. Release at least 10 models.
5. AMA sessions in the biggest crypto communities.
6. Massive banner ad through A-ads.
7. SEO optimization and Google Ads for each model.
8. CEX Listings.
Every day, step by step, the project is growing in both the number of users and the number of holders. We are pleased that our community is expanding and we thank everyone for choosing Multi AI. August promises to be eventful! 😉
As you may recall, we are using a hold-to-use model as an interim solution since the oracle and pay-per-use model have not been implemented yet. The requirements are set in MAI tokens and are not correlated with the dollar price, so we consider it a rational decision to adjust these requirements based on market cap.
In the table above, you can see the key milestones that will influence the holding requirements for using AI models.
Changes can occur in both directions and depend on what happens first. We believe that as the project grows, the requirements should be lowered to increase the potential number of users and reduce the entry barrier to an acceptable level.
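To make the mechanism concrete, here is a toy sketch of how a milestone table like the one above translates into a hold requirement. The numbers are placeholders made up for illustration only, not the official values from the table.

```python
# Toy illustration of milestone-based hold requirements. All numbers are placeholders,
# NOT official values; the real thresholds are defined in the table referenced above.
BASE_REQUIREMENT = 20_000                 # MAI to hold before any milestone is reached
HYPOTHETICAL_MILESTONES = [               # (market cap in USD, requirement once reached)
    (500_000, 10_000),
    (1_000_000, 5_000),
]

def required_hold(market_cap_usd: float) -> int:
    """Requirement for the highest milestone currently satisfied; it rises again if
    the market cap falls back below a milestone (changes work in both directions)."""
    requirement = BASE_REQUIREMENT
    for cap, tokens in HYPOTHETICAL_MILESTONES:
        if market_cap_usd >= cap:
            requirement = tokens          # lower the entry barrier as the project grows
    return requirement

print(required_hold(750_000))             # -> 10000 with these placeholder numbers
```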
This is Llama 2, the latest language model from Meta. Compared to Llama 1, it was trained on more data (~2 trillion tokens) and supports context windows of up to 4k tokens. This endpoint uses the 70-billion-parameter model, which is the largest Llama 2 model released by Meta so far.
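For anyone who wants to try a (much smaller) Llama 2 chat model locally rather than through the dApp, a minimal transformers sketch looks roughly like this. Note the checkpoints are gated behind Meta's license on Hugging Face, and the 70B variant needs far more hardware than shown here.

```python
# Sketch: chatting with a Llama 2 chat checkpoint via transformers.
# The 7B variant is used so it fits on a single GPU; access requires accepting Meta's license.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "[INST] Explain latent diffusion in two sentences. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```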
Share your creations in this chat: @multi_ai_experience
A long-awaited update to one of the most popular models, anything-v3. This model is designed to produce high-quality, highly detailed anime-style images with just a few prompts. Enjoy a whole new level of anime creation with Stable Diffusion.
Share your creations in this chat: @multi_ai_experience