Why do we even need the Transformer model in AI?

povashraful
3 min read · Oct 26, 2024


Hello everyone! 👋 Whether you’re an upcoming grad student or an undergrad, you’ve probably heard the term “Transformer” at least once. Maybe you’ve even studied it. But have you ever wondered why we needed such a model, or what makes it so revolutionary? As students of science, curiosity should be our driving force, right? Implementing what’s already been done is one thing, but understanding the why behind it, and pushing the boundaries a bit further, is where real learning happens. So, let’s dive in and start unraveling the story behind the Transformer model! I’ll keep it concise while giving you a clear picture. Let’s begin!

The Journey Begins

The story of Transformers begins with a basic human ambition: trying to “talk” to computers. Yes, you read that right! This journey started in the early 2010s, with researchers looking for ways to enable natural communication with machines, using words and sentences. But computers don’t understand words like we do; they only speak in 0s and 1s. So, how could we bridge that gap?

The First Step: Sequence-to-Sequence Models

In 2014, researchers introduced a model that would kick off this journey. The paper “Sequence to Sequence Learning with Neural Networks” described a way for computers to handle sequence-to-sequence tasks, like translating a sentence from English to Bengali. The model had two parts: an encoder that read a sequence of words (an English sentence) and a decoder that generated a translated sequence (its Bengali equivalent). Internally, it mapped words to numbers, processed them, and then converted those numbers back into words. This was groundbreaking! For the first time, computers could learn, end to end, to produce word-based outputs from word-based inputs, an essential step towards human-like communication.
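To make that picture a bit more concrete, here is a toy, untrained sketch of the encoder-decoder pattern in Python. This is not the LSTM model from the paper; the tiny vocabularies, sizes, and random weights below are all made up purely to show the flow: words in, numbers, one summary vector, numbers out, words out.

```python
# A minimal conceptual sketch (NOT the original paper's LSTM model) of the
# sequence-to-sequence idea: words become numbers, an encoder folds the whole
# input into one fixed-size vector, and a decoder emits output numbers that
# are mapped back to words. All vocabularies and weights here are made up.
import numpy as np

rng = np.random.default_rng(0)

src_vocab = {"i": 0, "love": 1, "books": 2}        # toy English vocabulary
tgt_vocab = {0: "ami", 1: "boi", 2: "bhalobashi"}  # toy (romanized) Bengali vocabulary

hidden = 8
W_embed = rng.normal(size=(len(src_vocab), hidden))  # word -> vector
W_step = rng.normal(size=(hidden, hidden))           # recurrent state update
W_out = rng.normal(size=(hidden, len(tgt_vocab)))    # state -> output word scores

def encode(sentence):
    """Read the input word by word, folding everything into ONE vector."""
    state = np.zeros(hidden)
    for word in sentence.split():
        state = np.tanh(W_embed[src_vocab[word]] + W_step @ state)
    return state  # the fixed-length "summary" of the whole sentence

def decode(state, steps=3):
    """Generate output words one at a time from that single summary vector."""
    out = []
    for _ in range(steps):
        scores = W_out.T @ state
        out.append(tgt_vocab[int(np.argmax(scores))])
        state = np.tanh(W_step @ state)
    return " ".join(out)

print(decode(encode("i love books")))  # untrained, so the "translation" is noise
```

Notice that everything the decoder ever learns about the input has to fit inside that single state vector. That bottleneck is exactly where the trouble starts.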

However, while promising, this model wasn’t perfect. It could handle short sentences but struggled with longer inputs. The problem was that the encoder had to squeeze the entire input sentence into a single fixed-length vector, so as sentences grew longer, information from earlier words got lost and translations became less reliable. It was like trying to read a novel through a narrow slit.

Enter the Attention Mechanism

This is where the attention mechanism, introduced in “Neural Machine Translation by Jointly Learning to Align and Translate,” brought about a major improvement. Instead of relying on that one summary vector, attention let the model dynamically focus on specific parts of the input at each step, like shining a spotlight on the important words. This allowed the model to prioritize the most relevant parts of a sentence, making it far easier to handle longer inputs. Translation quality improved noticeably, and machines could now translate long sentences while keeping the essential context.
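Here is an equally tiny Python sketch of that spotlight. It assumes we already have one encoder vector per input word, and it uses simple dot-product scoring for brevity; the paper above actually learns a small additive scoring network, but the weighted-average idea is the same. All the numbers are random placeholders.

```python
# A minimal sketch of the attention idea, assuming we already have one encoder
# state per input word (shape: words x hidden). Instead of relying on a single
# summary vector, the decoder scores every input position against its current
# state and builds a weighted average. Dot-product scoring is used here for
# brevity; the cited paper uses a learned (additive) scoring function.
import numpy as np

rng = np.random.default_rng(1)
encoder_states = rng.normal(size=(5, 8))  # 5 input words, hidden size 8
decoder_state = rng.normal(size=8)        # where the decoder currently "is"

scores = encoder_states @ decoder_state           # one relevance score per input word
weights = np.exp(scores) / np.exp(scores).sum()   # softmax -> attention weights
context = weights @ encoder_states                # spotlight-weighted summary

print(np.round(weights, 3))  # which input words the decoder is focusing on
print(np.round(context, 3))  # the context vector passed on to the decoder
```

The printed weights are the “spotlight”: a distribution over input words telling the decoder where to look at this step, instead of forcing everything through one fixed vector.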

This was just the beginning, but look how far we’ve already come. I’ll go into more detail in the upcoming posts. In the next part, we’ll explore how the introduction of the Transformer architecture in 2017 revolutionized everything further, leading to the powerful models we know today. Stay tuned!

[This post was refined using ChatGPT]

Written by povashraful

My name is Md. Ashraful Hossain, a recent Computer Science and Engineering graduate. I love computers and like to keep myself busy with them.
