AI Models

Transformative solutions that will improve your business today...

Meet Poolnoodle

Poolnoodle is our local hero. It's the name given to our family of in-house trained Large Language Models (LLMs). Maintaining our own models allows us to customize them to fit your needs.

Our Noodles

Poolnoodle-BigNoodle is our largest standard transformer model, featuring about 75 billion parameters. It's a general-purpose model, trained on many languages and a broad range of general capabilities.

Poolnoodle-BigNoodle's strong points are general reasoning and deep general knowledge, which make the model capable of handling complex decisions.

Poolnoodle-BigNoodle is trained to be helpful and explanatory; with the appropriate instructions, it's a great model for an AI chatbot.

With a standard context length of 8192 tokens (without context-extension measures), it's capable of holding fairly long conversations and can parse a considerable amount of text.
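As a rough illustration of working within that window, the sketch below trims a conversation history so the prompt and the model's reply stay under 8192 tokens. The `count_tokens` heuristic is a placeholder for a real tokenizer, and the message format is an illustrative assumption, not a published ScaiLabs interface.

```python
# Minimal sketch of keeping a conversation inside BigNoodle's 8192-token window.
# NOTE: count_tokens is a placeholder heuristic; a real tokenizer should be
# used in practice. The message format is an assumption for illustration.
from typing import Dict, List

CONTEXT_LIMIT = 8192      # BigNoodle's standard context length
RESPONSE_BUDGET = 1024    # tokens reserved for the model's reply

def count_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Drop the oldest turns until prompt plus reply fit in the window."""
    budget = CONTEXT_LIMIT - RESPONSE_BUDGET
    kept: List[Dict[str, str]] = []
    used = 0
    # Walk backwards so the most recent turns survive.
    for msg in reversed(messages):
        tokens = count_tokens(msg["content"])
        if used + tokens > budget:
            break
        kept.append(msg)
        used += tokens
    return list(reversed(kept))
```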

With 75 billion parameters, it's not the leanest model, and as such not the fastest. Use Poolnoodle-BigNoodle when you need deep reasoning and deep insights.

Poolnoodle-CodeNoodle is a Large Language Model derived from Code Llama, further trained and fine-tuned by ScaiLabs. It's based on the Llama 2 13B model and has about 13 billion parameters.

With an extended context length of 16,000 tokens, Poolnoodle-CodeNoodle is capable of handling fairly large queries.

Poolnoodle-CodeNoodle has been extensively trained on several programming languages, both to interpret existing code and to write code based on your descriptions. Please note that while it can solve fairly complex coding problems described in text, it has no real visual capabilities.
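As a hedged example of what a code-generation request could look like, the snippet below posts a natural-language description to a CodeNoodle endpoint. The URL, JSON fields and response shape are all hypothetical placeholders, not a documented ScaiLabs API.

```python
# Hypothetical code-generation request to CodeNoodle. The endpoint URL and
# the request/response fields are illustrative placeholders only.
import requests

prompt = (
    "Write a Python function that parses an ISO 8601 date string "
    "and returns a datetime.date object."
)

resp = requests.post(
    "https://api.example.com/v1/poolnoodle-codenoodle/generate",  # placeholder URL
    json={"prompt": prompt, "max_tokens": 512, "temperature": 0.2},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["completion"])  # hypothetical response field
```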

Poolnoodle-LongNoodle is a companion model to Poolnoodle-BigNoodle, with far fewer parameters but trained on mostly the same datasets. The difference between LongNoodle and BigNoodle is that LongNoodle supports a context length of more than 128K tokens.

With this large context length, Poolnoodle-LongNoodle's primary expertise is reading, processing and summarizing large amounts of text and data.

Poolnoodle-LongNoodle is capable of extracting information from formatted text like HTML and other markup or structured text formats, as long as the source material is plain text.
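Because the source material needs to be plain text, a common preprocessing step is stripping markup before submitting a document. The sketch below does this with only the Python standard library; how the resulting text is then sent to LongNoodle is left to your integration.

```python
# Reduce HTML to clear text before handing it to LongNoodle.
# Uses only the Python standard library.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects character data, skipping script and style contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    extractor = TextExtractor()
    extractor.feed(html)
    return "\n".join(extractor.parts)

print(html_to_text("<html><body><h1>Report</h1><p>Q3 results follow.</p></body></html>"))
```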

The context length of Poolnoodle-LongNoodle can be extended even further via RoPE scaling, although this comes at a significant memory cost.
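For the technically curious: assuming "RoPE scaling" here means the common linear position-interpolation scheme (the source doesn't specify one), the sketch below shows the core idea. Positions are divided by a scale factor so a longer sequence reuses the angle range the model was trained on; the memory cost mentioned above comes from attending over the longer sequence, not from the scaling itself.

```python
# Sketch of linear RoPE position interpolation. This is an assumption about
# the scheme used; the exact method is not specified here. A scale factor of
# 4 squeezes positions 0..512K into the 0..128K range seen during training.
def rope_angles(position: int, dim: int, scale: float = 1.0, base: float = 10000.0):
    """Rotation angles for one token position across a head dimension."""
    pos = position / scale  # the interpolation step
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With scale 4, position 200_000 produces the same angles as 50_000 unscaled:
assert rope_angles(200_000, 64, scale=4.0) == rope_angles(50_000, 64)
```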

Poolnoodle-FixerNoodle is an LLM trained to route requests between LLMs and plugins. It's a 13-billion-parameter model, based on the original Poolnoodle model, which has its roots in the Vicuna 13B model.

The model has been trained to interact with several so-called "plugins", which allow it to use external resources.

The model has been optimized for speed, rather than context length or deep understanding. It's capable of summarizing information within its context window.
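Conceptually, FixerNoodle sits in front of a table of plugins and downstream models. The sketch below shows that dispatch pattern, with a keyword heuristic standing in for the model's actual routing decision; every name in it is a hypothetical illustration.

```python
# Sketch of the routing pattern FixerNoodle implements: decide which plugin
# or downstream model should handle a request, then dispatch it. All plugin
# names and the routing heuristic are hypothetical illustrations.
from typing import Callable, Dict

PLUGINS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda q: str(eval(q, {"__builtins__": {}})),  # toy example only
    "code": lambda q: f"[forwarded to Poolnoodle-CodeNoodle] {q}",
    "chat": lambda q: f"[forwarded to Poolnoodle-BigNoodle] {q}",
}

def route(query: str) -> str:
    """Stand-in for FixerNoodle's routing decision.

    In a real deployment the model itself would pick the target;
    here a keyword heuristic keeps the sketch self-contained.
    """
    if any(ch.isdigit() for ch in query) and any(op in query for op in "+-*/"):
        target = "calculator"
    elif "def " in query or "function" in query:
        target = "code"
    else:
        target = "chat"
    return PLUGINS[target](query)

print(route("17 * 3"))  # -> 51 via the calculator plugin
```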

Poolnoodle-ToolNoodle is a model featuring 13 billion parameters that has been trained to interact with APIs. It has been trained to understand both API documentation and formal API definitions like OpenAPI/Swagger, RAML and WSDL (for SOAP services).

Poolnoodle-ToolNoodle isn't only capable of understanding those APIs and explaining them to you; it is also able to formulate and, with the right permissions, execute requests against those APIs and interpret the results where necessary.
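A minimal sketch of that loop, under the assumption that the model returns a structured request which your own code then executes: `ask_toolnoodle`, the spec and the URL below are hypothetical stand-ins, not a documented interface.

```python
# Sketch of the ToolNoodle execution loop: a natural-language request plus an
# OpenAPI spec goes in, a structured HTTP call comes out, and the host code
# executes it. ask_toolnoodle is a hypothetical stand-in for the model call.
import requests

def ask_toolnoodle(instruction: str, openapi_spec: dict) -> dict:
    """Placeholder returning the kind of structured request the model
    is described as formulating from the spec."""
    return {
        "method": "GET",
        "url": "https://api.example.com/v1/orders",  # placeholder endpoint
        "params": {"status": "open"},
    }

spec = {"openapi": "3.0.0", "paths": {"/orders": {"get": {"summary": "List orders"}}}}
plan = ask_toolnoodle("List all open orders", spec)

# Execute only what policy allows: "with the right permissions" above.
if plan["method"] == "GET":
    resp = requests.request(plan["method"], plan["url"], params=plan["params"])
    print(resp.status_code, resp.text)
```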

Poolnoodle-ToolNoodle can be used stand-alone, but it has also been optimized for use in a chain with other Large Language Models.

Poolnoodle-BabyNoodle is our smallest LLM, optimized to run on embedded systems. This model has been built from the ground up by ScaiLabs and trained on a set of basic capabilities.

The primary language the model supports is English, with a good understanding of German and French. While the model has some understanding of Dutch, it isn't yet good enough to hold a conversation in Dutch.

The model has been trained to interact with Poolnoodle-BigNoodle, Poolnoodle-FixerNoodle and Poolnoodle-ToolNoodle when its own capabilities aren't sufficient to complete the request. Furthermore, it's capable of calling upon a set of local plugins in order to execute calls.
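As a closing illustration, that escalation might look like the sketch below: try the small on-device model first, and hand the request to a larger Poolnoodle model when it can't complete it. The function names and the capability check are hypothetical.

```python
# Sketch of BabyNoodle's escalation pattern. All names and the capability
# check are hypothetical illustrations of the handoff described above.
from typing import Optional

def baby_noodle(query: str) -> Optional[str]:
    """On-device model; returns None when the task exceeds its capabilities."""
    if len(query.split()) > 50:  # toy stand-in for a real capability check
        return None
    return f"[BabyNoodle] {query}"

def big_noodle(query: str) -> str:
    """Stand-in for a remote call to Poolnoodle-BigNoodle."""
    return f"[BigNoodle] {query}"

def handle(query: str) -> str:
    answer = baby_noodle(query)
    return answer if answer is not None else big_noodle(query)

print(handle("What's the capital of France?"))  # handled locally
```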