To support its newly introduced Watsonx data science platform, IBM has announced the addition of new generative AI models and capabilities. These developments are expected to boost the platform's functionality and open up AI to a wider range of sectors.

The recently released IBM AI models, known as the Granite series, are comparable to well-known large language models (LLMs) such as OpenAI's GPT-4 and ChatGPT. These models can create, summarize, and evaluate text, according to TechCrunch.

Moreover, IBM says the Granite series will stand apart because the company will disclose the data used to train the models and the methods used to filter and process it. These details will be released before the models' Q3 2023 launch.

How Does It Work?

The Granite models, developed by IBM Research, use a decoder-only architecture and include Granite.13b.instruct and Granite.13b.chat. Predicting the next word in a sequence is the mechanism at the heart of modern large language models' generative capabilities.
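To make the next-word-prediction idea concrete, here is a toy, hand-rolled sketch: a real decoder-only model like Granite scores the next token with a neural network over billions of parameters, but a simple bigram count table (an assumption for illustration, not IBM's method) shows the same generate-one-token-at-a-time loop.

```python
# Toy illustration of next-token prediction, the mechanism behind
# decoder-only language models. A hand-built bigram table stands in
# for the neural network that a real model would use.
from collections import Counter, defaultdict

corpus = "ibm granite models generate text . granite models summarize text .".split()

# Count how often each token follows another (the "training" step).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    """Greedily append the most likely next token, one step at a time."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        candidates = bigrams.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("granite models"))
```

The loop is the essential point: generation is nothing more than repeatedly asking "given everything so far, what token comes next?" and appending the answer.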

According to the company's blog, efficiency is one of the Granite models' standout qualities: a model with 13 billion parameters can run on a single V100-32GB GPU.

This efficiency lets the models perform well on specific business-domain tasks such as summarization, question answering, and classification, while also reducing their environmental footprint.

They also support additional natural language processing (NLP) tasks, such as content generation, insight extraction, retrieval-augmented generation (RAG), and named entity recognition, with applications across several sectors.


To train the models, IBM built specialized datasets totaling 1 trillion tokens, drawn from 7 terabytes of raw data that was reduced to 2.4 terabytes after pre-processing. The datasets cover domains including the internet, academia, programming, law, and finance.

This approach ensures the models are fluent in sector-specific language and vocabulary, making them valuable assets for organizations.

Update on IBM's Watsonx.data

In addition to the Granite models, IBM is adding generative AI capabilities to its Watsonx.data platform, enabling users to quickly discover, enrich, visualize, and refine data for AI applications via a chatbot-like interface. The feature is scheduled to go live soon, TS2 reported.

IBM is also adding a vector database feature to Watsonx.data to support retrieval-augmented generation (RAG). RAG improves the quality of a model's responses by linking large language models to external knowledge sources, an important benefit for enterprise clients.
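The RAG pattern can be sketched in a few lines. In a production setup, watsonx.data's vector database would store neural embeddings and return the nearest documents; in this hedged sketch, simple word-overlap scoring stands in for vector similarity, the sample documents are invented for illustration, and the final LLM call is replaced by returning the augmented prompt.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the
# most relevant documents, then prepend them to the prompt so the model
# can ground its answer in external knowledge.

# Illustrative stand-in for an external knowledge source.
knowledge_base = [
    "Granite models are trained on 1 trillion tokens of curated data.",
    "Watsonx.governance helps detect bias and drift in deployed models.",
    "RAG links a language model to external knowledge sources.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query.
    (A real system would rank by vector similarity instead.)"""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved context."""
    context = "\n".join(retrieve(query, knowledge_base))
    # A real system would now send this prompt to the LLM.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many tokens were the Granite models trained on?"))
```

The design point is that the model itself is unchanged; only the prompt is enriched, which is why RAG is attractive to enterprises whose knowledge lives outside the model's training data.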

IBM is introducing Watsonx.governance in response to rising concerns about consumer privacy, model bias, drift, and ethical AI practices. The toolkit will provide techniques to protect client information, detect bias and drift in models, and ensure adherence to ethical norms. IBM will also soon launch Intelligent Remediation, which uses generative AI models to help IT staff summarize issues and recommend remediation strategies.

By working with customers on their AI journey, from fundamental data strategy to model tuning and governance, IBM is positioning itself as a transformation partner. The development comes as IBM faces growing pressure to prove it is a force to be reckoned with in the AI industry, particularly after Q2 revenue fell short of analyst estimates.

These new technologies seek to reinforce IBM's position in the expanding AI ecosystem and deliver relevant AI solutions for organizations across varied industries.


Byline: Quincy

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.