
Types of No-code AI tools

Breaking down the different No-code AI tool types available and their purpose.

In this post, I would like to break down the different No-code AI tool types available and their purpose.

1. Drag-and-drop or flow builder tools

In most data science projects what you actually build is a pipeline. A data science pipeline is a set of processes that convert raw data into actionable answers to business questions. Once you create a pipeline, you can use it to automate the flow of data from source to destination, ultimately providing you with insights for making business decisions. An AI workflow only delivers value once it is put into production, and pipelines are the core construct for that.

Most machine learning libraries in Python and R allow the creation of such pipelines. In the image below you can see a simplified diagram of an AI pipeline: you select data sources, apply the proper transformations, train your models, and adjust their parameters.

Simplified Data Science pipeline diagram from Data Connection to Deployment
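To make that concrete, here is a minimal sketch of such a pipeline in code using scikit-learn. The dataset, column names, and model choice are made up for illustration; a real project would have many more steps:

```python
# A minimal sketch of a data science pipeline with scikit-learn.
# "customers.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# 1. Data source
df = pd.read_csv("customers.csv")
X, y = df.drop(columns=["churned"]), df["churned"]

# 2. Transformations: impute/scale numeric columns, encode categorical ones
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()),
                      ("scale", StandardScaler())]), ["age", "monthly_spend"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan", "region"]),
])

# 3. Model training: chain preprocessing and the estimator into one pipeline
pipeline = Pipeline([("prep", preprocess),
                     ("model", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
pipeline.fit(X_train, y_train)
print("holdout accuracy:", pipeline.score(X_test, y_test))
```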

What you would usually do next is convert that pipeline into a web service that you can call from your end applications or dashboards.
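As a rough sketch of that step, assuming the pipeline above was saved to disk with joblib, a few lines of FastAPI (one serving option among many) are enough to wrap it as a web service:

```python
# A minimal sketch of serving a trained pipeline as a web service.
# "pipeline.joblib" is a placeholder for your saved pipeline file.
import joblib
import pandas as pd
from fastapi import FastAPI

app = FastAPI()
pipeline = joblib.load("pipeline.joblib")

@app.post("/predict")
def predict(record: dict):
    # Wrap the incoming JSON record in a one-row DataFrame and score it
    prediction = pipeline.predict(pd.DataFrame([record]))[0]
    return {"prediction": str(prediction)}

# Run with: uvicorn <this_filename_without_.py>:app --reload
```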

The first type of No-code AI tool (and one of my favorites) is what I like to call Flow Builder or Visual Modeling. You basically build the pipeline visually instead of using code, and every node in the canvas represents a transformation or step in the pipeline.

The user interface of these tools has improved a lot in the last couple of years, and most of them allow integration of R or Python in case you need extra customization. These pipelines can get pretty complex, but the UX helps you organize them and easily explain to the business what you are actually doing. You also have automation features that help you select the best algorithm or create data features automatically, and productionizing these pipelines is extremely easy, usually without writing any code.

Examples of these tools include IBM SPSS Modeler (my favorite!), Knime, Alteryx, and Rapidminer. A possible concern for some companies is that these pipelines are usually not built on Open Source, so you are “locked in” with the vendor. Some of them offer options to export the visual flows into code, but with certain limitations.

Examples of Flow Building interfaces: Left, Knime and right, IBM SPSS Modeler


2. AutoML

As you’ve seen before, those pipelines have many steps and can get very complex quickly. Most of a data scientist's time is spent in the data transformation process, getting the data ready to train the models. That includes cleaning the data, finding new features, and formatting it correctly so the algorithms can learn from it.

In the model training step, there is a lot of experimentation and tuning that needs to be done: selecting the right algorithm among the multiple options available, configuring hyperparameters for each of them, and more. To show it visually, the diagram below illustrates how AutoML simplifies the machine learning workflow:

Traditional vs Auto ML Workflow
Examples of tools in the AutoML space are DataRobot and IBM AutoAI, both shown further below.
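To appreciate what is being automated, here is a small sketch of the manual version of that search using scikit-learn's GridSearchCV, trying a couple of algorithms with their own hyperparameter grids and keeping the best. This is a toy illustration of the pattern, not any vendor's API:

```python
# Manually searching over algorithms and hyperparameters --
# the repetitive work that AutoML tools automate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Candidate algorithms, each with its own hyperparameter grid
candidates = [
    (Pipeline([("scale", StandardScaler()),
               ("model", LogisticRegression(max_iter=1000))]),
     {"model__C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0),
     {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}),
]

best_score, best_model = 0.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"best cross-validated accuracy: {best_score:.3f}")
```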

AutoML tools come with stunning UX to help assess and trust the results. IBM’s tool also allows exporting all the pipelines generated into a Python notebook that a Data Scientist can easily pick up and use as a starting point, giving them superpowers and saving them a lot of time!

Left, DataRobot AutoML user interface and right, IBM AutoAI user interface

There are some recent articles out there claiming that AutoML will replace data scientists, a statement that is far from true. AutoML lets data scientists build models with a high degree of automation and run hyperparameter searches over different types of algorithms, work that is otherwise time-consuming and repetitive, so teams can reduce the time required to create functional models. By reducing the complexity that comes with building, testing, and deploying entirely new ML frameworks, AutoML streamlines the processes required to solve line-of-business challenges. It is true that some of the new tools can be used by business users, who simply define the business problem and direct the action, but that frees data scientists to focus on harder and more specialized problems.

There are new startups raising investments to keep pushing the innovation and simplification of AI forward to enable business users. A good example is Obviously.ai, a startup that just raised $3.6M and is built for business users who are not data scientists. Another one is Pecan.ai, which just raised $35M a few days ago. Both have stunning products with a lot of potential. Another one to follow closely is Akkio, which claims you can go from data to AI in 10 minutes with no code or data science skills required.

On the left, Obviously.ai, and on the right, Pecan.ai, both showing how simple building churn prediction AI can be for business users

3. Pre-trained APIs

What if you have AI use cases but no data scientists or data to train models? No problem! There are very powerful models accessible via APIs that are already trained for you. These models are very complex; big organizations and research labs such as IBM, AWS, OpenAI, or Google do the heavy lifting of training them with very talented AI experts, and they provide the service for you to consume. They come with limitations and sometimes a lack of flexibility, but they cover some of the most common use cases and can be consumed right away!

These are some examples of Use Cases you can cover:

  • Visual Recognition and Object Detection
  • Facial Recognition
  • Vision OCR to extract text from documents or images
  • Text classifiers for sentiment analysis, urgency detection in support tickets, or abuse detection
  • Keyword and entity extractors
  • Text Translation
  • Text Generation
  • Speech-to-text or Text-to-speech

What’s also powerful is that most of these APIs also take care of testing the accuracy of the models and making sure the training sets used are not biased or unfair, so you don’t have to do it yourself. If that’s a concern, they provide reports with benchmarks. Popular vendors offering these trained AI APIs are IBM Watson, Google AI, and AWS, or startups like MonkeyLearn and Clarifai, to name a few.
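As a flavor of how simple consuming one of these services is, here is a generic call to a hypothetical sentiment-analysis endpoint. The URL, payload shape, and auth header below are placeholders, since each vendor documents its own:

```python
# A sketch of consuming a pre-trained AI API over HTTP.
# Endpoint, key, and response format are hypothetical examples.
import requests

API_URL = "https://api.example.com/v1/sentiment"  # placeholder endpoint
API_KEY = "your-api-key-here"                     # placeholder credential

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "The support team resolved my issue quickly. Thanks!"},
    timeout=10,
)
response.raise_for_status()
# Hypothetical response: {"sentiment": "positive", "confidence": 0.97}
print(response.json())
```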

4. Transfer learning tools

If the AI APIs can give you a starting point but are not good enough and you have data available for training, then transfer learning tools can be your best friend. These tools allow users with limited machine learning expertise to train high-quality models, giving you a head start and adding “knowledge” to previously trained models.

These systems come with automatic deep transfer learning (meaning they start from an existing deep neural network trained on other data) and neural architecture search (meaning they find the right combination of extra network layers) for use cases such as language pair translation, natural language classification, and image classification.

For example, you can use a general computer vision model to classify vehicles such as cars, trucks, trains, or bicycles, but it might not be very good at classifying cars by car maker. With these tools you can retrain the computer vision model, adding that additional knowledge. This helps because you don’t need as much labeled training data and you don’t need to figure out the neural network architecture. These tools are usually No-code and very user-friendly, although you will need a bit of patience labeling the data, which is a manual step.
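Under the hood, these tools do something like the following Keras sketch for the car-maker example: take a network pre-trained on ImageNet, freeze its learned “knowledge”, and train only a small new head on your labeled photos. The class count and dataset path are hypothetical:

```python
# A minimal transfer-learning sketch with Keras: reuse MobileNetV2
# trained on ImageNet and train a small new head to classify car makers.
import tensorflow as tf

NUM_MAKERS = 10  # hypothetical: ten car makers in the labeled data

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_MAKERS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be your labeled images, e.g. built with
# tf.keras.utils.image_dataset_from_directory("car_photos/")
# model.fit(train_ds, epochs=5)
```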

Right, IBM Watson Image Recognition training tool, and left, Google AutoML Vision tool

A more advanced example of transfer learning is learning in simulators and then transferring that knowledge to real life. For example, in self-driving cars, many driving situations are first trained in 3D simulators, which is safer and can be parallelized to speed up innovation. This goes beyond the scope of our No-code AI community, but it is just too awesome not to mention. I recommend you check out the Unity Simulation solution, which applies the engine used to create video games to building environments to train, test, and validate AI in a real-time 3D virtual world.

Udacity Self-Driving car simulator based on GTA V

The diagram below explains how transfer learning works and why it has been one of the main drivers accelerating new machine learning use cases in the industry.

Left, how transfer learning works, and right, transfer learning driving success in commercial use cases

5. AI App & Dashboard builder

The end goal of AI is to solve real problems, and that means making models available to business users, subject matter experts, and front-line decision makers so they can leverage the predictions those models generate.

This category is focused on turning any model into an AI application without coding. You can build dashboards or apps using drag-and-drop widgets, data visualizations, and powerful pre-built templates that enable the creation and deployment of new AI apps in minutes. You don't need to talk to several teams and IT departments to make this happen, which simplifies the process even more, though it can also raise security concerns depending on the organization.

For No-code app development there are many options out there. Copy.ai built its app using Webflow, which stretches that tool's purpose a bit, since Webflow is designed for No-code web development. Bubble.io is a great option.

If you need to display results in a dashboard, BI vendors such as Cognos, Tableau, Power BI, or Looker allow business users to embed AI models in their reports. They also provide AI assistants to help you build the best visualizations and make those dashboards more narrative and easy to consume.

In the Low-code space, there is Plotly Dash, which converts data science Python scripts into production-grade apps. In the no-code/low-code space there is also Palantir, which provides a very advanced full-stack platform with ontology management and integration with advanced data platforms.
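To give a flavor of the Low-code end, here is a minimal Plotly Dash sketch that puts a model's prediction in front of a business user. The predict_churn() function below is a stand-in I made up for a real trained model:

```python
# A minimal Dash app exposing a model prediction to business users.
from dash import Dash, Input, Output, dcc, html

def predict_churn(monthly_spend: float) -> float:
    # Placeholder for a real model; returns a fake churn probability
    return max(0.0, min(1.0, 1.0 - monthly_spend / 200.0))

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Churn risk explorer"),
    dcc.Slider(id="spend", min=0, max=200, value=50),
    html.Div(id="risk"),
])

@app.callback(Output("risk", "children"), Input("spend", "value"))
def update_risk(spend):
    # Re-score whenever the business user moves the slider
    return f"Predicted churn risk: {predict_churn(spend):.0%}"

if __name__ == "__main__":
    app.run(debug=True)
```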

There are a lot of options!

I hope this breakdown by category makes sense, and please let me know if I should add any other category. I didn't include all the vendors or tools available in the market; there are many, all exciting, and most of the time you would need to combine them to build an end-to-end AI solution.

Written by
Armand Ruiz
I'm a Director of Data Science at IBM and the founder of NoCode.ai. I love to play tennis, cook, and hike!