
Transformer

  • December 12, 2024

    From Transformer to LLMs

    LLMs · Language-Models · Transformer · Fine-tuning · RAG

    This article explores the evolution from Transformers to Large Language Models (LLMs), covering the mechanics of self-attention and multi-head attention, the role of position embeddings, the main types of transformer models, and how LLMs are trained and fine-tuned.

    Read more →