Today’s AI industry faces significant challenges due to centralization, with major advancements often controlled by a few large corporations. This leads to concerns about data privacy, monopolistic practices, and limited access to cutting-edge technology. Additionally, the over-reliance on Large Language Models (LLMs) like GPT-3, despite their capabilities, brings issues such as high computational costs, environmental impact, and potential biases in the data they are trained on. These models require vast data and resources, making them accessible only to well-funded organizations.
Assisterr addresses these challenges by introducing Small Language Models (SLMs) and promoting a community-owned approach to AI development. SLMs are designed to be more efficient, requiring less computational power and data while maintaining high performance, making AI technology more accessible and sustainable. Moreover, Assisterr’s community-owned models and AI agents empower users to contribute to and benefit from AI advancements, fostering innovation and inclusivity, and ensuring that the benefits of AI are shared more broadly across society.
Source: Assisterr website
Assisterr AI is a decentralized AI platform designed to democratize access to artificial intelligence by leveraging Small Language Models (SLMs) and community-owned AI agents. Its primary purpose is to provide a more efficient, accessible, and sustainable alternative to traditional AI models, addressing the limitations of Large Language Models (LLMs) and promoting a collaborative AI ecosystem.
Large Language Models (LLMs) like GPT-3 and BERT are AI models trained on vast amounts of text data to understand and generate human-like language. They are capable of performing a wide range of tasks, from text completion to translation and summarization. However, LLMs have several notable shortcomings:
Small Language Models (SLMs), while similar in concept to LLMs, are designed to be more accurate, specialized, and efficient. By focusing on specific tasks and tailored datasets, SLMs deliver superior performance and situational adaptability for niche applications at a fraction of the cost. This also makes open-source SLM development attractive: smaller projects have previously built SLMs whose accuracy is competitive with established LLMs at much lower cost.
Small Language Models (SLMs) are at the core of Assisterr’s technology. Unlike Large Language Models (LLMs), SLMs are designed to be more efficient and specialized. They focus on specific tasks and datasets, which allows them to deliver superior performance for niche applications. This specialization makes SLMs more accessible and sustainable, as they require less computational power and data.
To address the limitations of LLM-based agents, advanced approaches have emerged involving multiple small language models (SLMs) working in collaborative agentic frameworks. Two core approaches are leveraged when developing AI agents from SLM ensembles: Mixtures of Experts (MoE) and Mixtures of Agents (MoA).
Mixtures of Experts (MoE)
Source: Assisterr Litepaper
When combined in MoE ensembles, SLMs gain learning flexibility without losing their capacity for functional problem-solving. Ensemble learning combines the reasoning skills of multiple smaller models, each specialized in a different context, to solve complex problems, producing a hybrid understanding that still allows the AI to analyze a domain in depth. Layers of experts can themselves be composed of MoEs, creating hierarchical structures that further extend contextual capacity and problem-solving proficiency. An MoE typically uses a sparse gating layer that dynamically selects among several parallel networks to produce the most appropriate response to a prompt. For more flexible responses, individual experts can be fine-tuned for tasks such as code generation, translation, or sentiment analysis. More sophisticated MoE architectures may stack several such MoE layers in combination with other components. Like the rest of a language model architecture, the MoE gating layer operates on semantic tokens and requires training.
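The sparse gating described above can be sketched in a few lines. The toy linear "experts", dimensions, and scoring below are invented stand-ins for illustration, not Assisterr's implementation; the point is only the mechanism: score every expert, keep the top-k, and combine their outputs by the renormalized gate weights.

```python
import math
import random

random.seed(42)
DIM, N_EXPERTS, TOP_K = 4, 4, 2

# Each expert is a toy linear map; in practice each is a full sub-network.
experts = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]
# The gating layer scores how relevant each expert is to the input.
gate_w = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def moe_forward(x):
    scores = [dot(w, x) for w in gate_w]
    # Sparse gating: keep only the top-k experts, renormalize their weights.
    top = sorted(range(N_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    weights = softmax([scores[i] for i in top])
    # Each selected expert produces a scalar output here; combine by weighted sum.
    return sum(w * dot(experts[i], x) for w, i in zip(weights, top))

print(moe_forward([1.0, 0.5, -0.3, 0.2]))
```

Because only TOP_K of the N_EXPERTS networks run per input, compute cost stays near that of a single small model even as the expert pool grows.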
Mixtures of Agents (MoA)
When assembled into MoA architectures, SLMs enhance the selectivity of diversified reasoning ensembles, enabling AI to execute a task precisely with the required methodology. Agentic models are assembled into a consortium that layers execution protocols, improving efficiency and problem-solving on complex tasks, so the AI can operate in multi-domain scenarios. Teams of agents can work in sequence, iteratively improving upon previous results. MoA ensembles built entirely from open-source models have previously outperformed much larger proprietary models, surpassing GPT-4 Omni’s 57.5% score on AlpacaEval 2.0. A Mixture of Agents (MoA) operates on the level of model outputs, not semantic tokens. It has no gating layer; instead, the text prompt is forwarded to all agents in parallel. Outputs are also not aggregated by addition and normalization: they are concatenated and combined with a synthesize-and-aggregate prompt, then passed to a separate model that produces the final output. The models are thus divided into “proposers,” which compute diverse outputs, and “aggregators,” which integrate the results. As with MoE, several of these layers can be combined. The absence of gating layers makes this approach more flexible and adaptable to complex tasks.
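The proposer/aggregator split can be sketched as follows. The proposer and aggregator functions here are toy stand-ins for real SLM calls, and the synthesize-and-aggregate prompt format is an assumption for illustration only:

```python
# Toy stand-ins for SLM calls; a real system would invoke actual model APIs.
def proposer_a(prompt):
    return f"[A] concise answer to: {prompt}"

def proposer_b(prompt):
    return f"[B] detailed answer to: {prompt}"

def aggregator(prompt, candidates):
    # The aggregator model receives a synthesize-and-aggregate instruction
    # plus the concatenated proposer outputs, and produces the final response.
    synth_prompt = (
        "Synthesize a single best answer from the candidate responses below.\n"
        + "\n".join(f"Candidate {i + 1}: {c}" for i, c in enumerate(candidates))
        + f"\nOriginal question: {prompt}"
    )
    return synth_prompt  # a real aggregator would generate text from this prompt

def moa_layer(prompt, proposers, aggregate):
    # No gating layer: the prompt is forwarded to every proposer in parallel.
    candidates = [p(prompt) for p in proposers]
    return aggregate(prompt, candidates)

final = moa_layer("What is an SLM?", [proposer_a, proposer_b], aggregator)
print(final)
```

Stacking several such layers, with one layer's aggregate output feeding the next layer's proposers, gives the iterative refinement described above.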
The DeAI (Decentralized AI) economy is a fundamental aspect of Assisterr’s platform. It leverages blockchain technology to create a decentralized marketplace for AI models and data. This economy incentivizes data sharing and collaboration, ensuring that contributors are fairly rewarded for their efforts. Key components of the DeAI economy include:
AssisterrAI provides a unified infrastructure pipeline to create, tokenize, and distribute Small Language Models (SLMs) in a way that incentivizes all community contributions. The AI Lab allows users to contribute to models in their knowledge area, becoming both co-creators and co-owners of the AI. This approach ensures that AI gig workers not only earn on a one-time, transactional basis but also capture wider market value, securing a better future and making people beneficiaries of AI rather than victims of progress and automation.
To access the platform, users connect a browser-based Solana wallet, as well as their X profile and Discord account. They can then create models through the AI Lab tab of the Assisterr user interface, which offers a simple form to specify key parameters, prompt templates, and model metadata. Users can directly upload data that will be embedded in the model through retrieval augmented generation (RAG) and later through fine-tuning. Once created, the model can be made public through the SLM store. In the future, the AI Lab will adopt a modular, multi-model paradigm with a Mixture of Agents architecture and augmented retrieval strategies.
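The retrieval step behind RAG can be illustrated with a toy in-memory index: uploaded documents are embedded, and at query time the closest documents are prepended to the prompt as context. The bag-of-words embedding and sample documents below are simplifications for illustration, not Assisterr's actual pipeline (which would use a learned sentence embedder and a vector store):

```python
import math
from collections import Counter

# Stand-ins for user-uploaded documents.
docs = [
    "Assisterr SLMs are specialized small language models.",
    "The SLM store lets users query published models.",
]

def embed(text):
    # Toy bag-of-words vector; a real pipeline uses a neural embedder.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

index = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

def augmented_prompt(query):
    # Prepend the retrieved context so the model answers from uploaded data.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(augmented_prompt("What are SLMs?"))
```

This is why RAG-based models can be created quickly from a data upload: no weights change at creation time, only the retrieval index, with fine-tuning available as a later, heavier step.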
Assisterr contributors are rewarded for all steps in the genesis of an AI model, from data contribution and model creation to validation and review. This revenue-sharing mechanism is implemented through an SLM tokenization module. The AI Lab effectively connects business use cases with the required data and expertise. Once a model appears in the SLM Store tab of the Assisterr interface, any user can query it through a chatbot interface. Currently, bots assist with various niches in Web3 ecosystems, healthcare, software development, and finance.
Every model in the SLM store comes with a treasury denominated in Assisterr’s native token, which is topped up from the respective user’s balance upon each query. Queries can be placed from the WebUI with a connected Solana wallet or through an API, making models from the SLM store accessible through other applications. Contributors can create SLMs, assemble them into agents, and deploy them through a no-code interface, providing a quick go-to-market period and a fast innovation cycle. This solves the distribution and monetization challenges faced by independent model creators and developers.
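API access of this kind might look like the following sketch. The endpoint URL, payload fields, and bearer-token authentication are illustrative assumptions, not Assisterr's documented API; consult the official documentation for the real interface:

```python
import json
import urllib.request

# Placeholder endpoint; the real URL, payload shape, and auth scheme
# are assumptions and should be taken from Assisterr's API docs.
API_URL = "https://api.example.com/v1/slm/query"

def build_query_request(model_handle: str, prompt: str, api_key: str):
    """Build an HTTP request that submits a prompt to a published SLM."""
    payload = json.dumps({"model": model_handle, "prompt": prompt}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

def query_slm(model_handle: str, prompt: str, api_key: str) -> dict:
    """Send the request and decode the JSON response."""
    req = build_query_request(model_handle, prompt, api_key)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Under this shape, each call would debit the model's treasury from the caller's balance, so the same endpoint serves both the WebUI and third-party integrations.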
Through the Contribute and Earn tab, users can participate in iterative improvements to existing models from the SLM store by fulfilling data requests and validating performance metrics in exchange for management tokens (MTs) or the native Assisterr token. This peer review process ensures constant evolution and increased throughput in model creation over time. Combined with features such as Mixture of Agents (MoA), this allows for cumulative progress and continuous bottom-up tinkering. The modular and specialized nature of SLMs enables rapid integration into existing work pipelines. In the future, businesses or individuals will be able to describe their problems, and Assisterr’s services will involve a relevant pool of SLMs/Agents to find a solution.
The native Assisterr token is the vehicle upon which AssisterrAI ecosystem operations are run. It is transacted in response to the validation of actions taken in fulfillment of smart contract protocols at each stage of the SLM development process. By leveraging the token, participants can engage with the facilities of the Assisterr ecosystem, such as accessing products, paying fees, and contributing to SLMs’ creation, management, and monetization.
Decentralized finance (DeFi) AI agents are a significant innovation in the Web3 space. Moving beyond general-purpose recommender systems, specialized AI operating within safe, permissioned constraints can better optimize and automate financial portfolios. Agentic SLMs, created for rapid-transaction media like Solana DeFi protocols, can enhance lending/borrowing, perpetual trading, and staking. These agents provide better data curation, multimodal reasoning, and deep functional analysis through SLM ensembles and modern Mixture of Agents (MoA) consortia.
Trading agents, tailored for complex trading scenarios, can analyze wallet clusters and price action trends, proving highly useful in both the volatile DeFi market and traditional finance (TradFi). SLM-based MoA can be particularly effective in data-referenced trading strategies, where the execution medium and method are crucial. These agents enhance trading efficiency and profitability by leveraging advanced algorithms and real-time data.
Autonomous chat agents with advanced learning and analytical capabilities are valuable across academic, social, and professional arenas. They can serve as support proxies for various services, connecting to social networks and IT applications. By incorporating agentic functionality, these conversational support models can act as liaisons, implementing functions based on user feedback and providing actionable support.
SLMs can create text-based, audio-based, or video-based proxies, producing avatars for deep-dive, public-facing tasks. These avatars can handle complex utilities such as 3D avatars, autonomous text-to-video generation, and livestream integrations on social platforms. SLM-based MoA can enhance next-generation multimodal interactions, making public-facing avatars more interactive and effective.
The launch of a specialized Web3 Developer Relations (DevRel) proof of concept on the AssisterrAI platform demonstrated a strong market fit. A robust DevRel regime is essential for engaging developers and providing comprehensive support when adopting a technology stack. However, this comes with substantial costs, with salaries for DevRel roles ranging from $90k to $200k per year. Many developer support requests are predictable and can be automated, increasing DevRel efficiency through the targeted use of SLMs. This approach reduces costs while maintaining high-quality support for developers.
1. Visit the Assisterr Website: Go to Assisterr’s website and click “Open App”.
2. Connect Your Wallet: Click the “Select Wallet” button and connect your browser-based Solana wallet. This wallet will be used for transactions and for accessing various features on the platform.
3. Link Social Accounts: Connect your X profile and Discord account. These connections help verify your identity and integrate your social presence with the Assisterr ecosystem.
4. Complete Registration: Follow the on-screen instructions to complete the registration process. Once registered, you can start exploring the platform and its features.
1. Navigate to the SLM Store: After logging in, go to the SLM Store tab on the Assisterr interface.
2. Browse Available Models: Explore the various Small Language Models (SLMs) available in the store. Each model is designed for specific tasks and industries, such as Web3 ecosystems, healthcare, software development, and finance.
3. Query Models: You can query any model through a chatbot interface. Select the model you are interested in and start interacting with it. Queries can be made from the web interface with a connected Solana wallet or through an API for integration with other applications.
1. Access the AI Lab: Go to the AI Lab tab on the Assisterr interface.
2. Specify Model Parameters: Fill out the configuration form to specify key parameters, prompt templates, and metadata for your model. This includes the model’s name, handle, purpose description, category, cover image, conversation starters, and dataset. You can also fast-track this process by using the AI assistant.
3. Upload Data: Directly upload data that will be embedded in the model through retrieval-augmented generation (RAG) and fine-tuning. This data helps the model perform its intended tasks.
4. Publish Your SLM: Once you have configured the model, click the publish button. Your model will be generated, and you can choose to make it public on the SLM store or keep it private. Making it public allows other users to access and query your model.
Assisterr, a Cambridge-based AI infrastructure startup, successfully closed a $1.7 million pre-seed funding round. This investment round saw participation from prominent Web3 venture funds, including Web3.com Ventures, Moonhill Capital, Contango, Outlier Ventures, Decasonic, Zephyrus Capital, Wise3 Ventures, Saxon, GFI Ventures, X Ventures, Koyamaki, Lucid Drakes Ventures, and notable angels such as Michael Heinrich, Mark Rydon, Nader Dabit, Anthony Lesoismier-Geniaux, and Ethan Francis. The funds have been instrumental in building Assisterr’s foundational infrastructure and launching its platform.
Since its launch, Assisterr has achieved significant milestones, including attracting 150,000 registered users and launching over 60 Small Language Models (SLMs) for leading Web3 protocols like Solana, Optimism, 0g.ai, and NEAR. Additionally, Assisterr has garnered recognition by winning multiple global hackathons and participating in Google’s AI Startups program, securing $350,000 in funding to support its GPU, CPU, and cloud infrastructure needs.
Assisterr has a clear roadmap for future growth and development. Key milestones include:
AI Lab (Q4 2024)
Network Growth (H1 2025)
Mixture of SLM-Agents (H2 2025)
Assisterr is pioneering a new decentralized, community-owned AI era by leveraging Small Language Models (SLMs) and innovative economic models. By addressing the limitations of Large Language Models (LLMs) and promoting a collaborative approach, Assisterr is making AI technology more accessible, efficient, and sustainable. The platform’s comprehensive ecosystem, including AI Labs, the SLM Store, and collaborative elements, empowers users to create, share, and monetize AI models.