
UNVEILING THE INNOVATIONS OF ARTIFICIAL INTELLIGENCE

SupriyaB1108

Artificial Intelligence (AI) is changing the way we live and work. AI technologies, including machine learning and natural language processing, are transforming industries by automating complex tasks, analysing large datasets, and enhancing decision-making processes. As AI continues to evolve, it promises to revolutionize how we interact with technology, solve problems, and make informed decisions. My learning journey with AI was very interesting, as it empowered me with the knowledge and skills needed to harness cutting-edge AI technologies such as ChatGPT, GPT models, and large language models (LLMs).

During this learning process I explored the nuances of GPT models, LLMs, and neural networks, along with their effective implementation and limitations. There are numerous AI chatbots available today, such as ChatGPT, Google Gemini, and Microsoft Copilot. To use these chatbots effectively, prompt engineering is important: good prompts lead to good results. It is an iterative process of refining prompts to improve the AI's accuracy and effectiveness, which enables chatbots to handle complex tasks such as code generation and logic building. For example, using ChatGPT:

What is the difference between a List and an ArrayList in C#?

Write a professional summary for a technical lead.

Share an example.

Now trim it down to less than 60 words

Rewrite in less technical words
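The exchange above can be sketched as a running message list, where each follow-up prompt is appended to the conversation history so the model refines its previous answer in context. The actual API call is commented out here because it needs a configured client and API key; the growing message structure is the point being illustrated.

```python
# Sketch of iterative prompting: the conversation history grows with each
# follow-up, so every new prompt is answered in the context of earlier turns.
conversation = [
    {"role": "user",
     "content": "What is the difference between a List and an ArrayList in C#?"},
]

follow_ups = [
    "Write a professional summary for a technical lead.",
    "Share an example.",
    "Now trim it down to less than 60 words.",
    "Rewrite in less technical words.",
]

for prompt in follow_ups:
    # With a real client (e.g. the openai package), each turn would look like:
    # reply = client.chat.completions.create(model="gpt-4", messages=conversation)
    # conversation.append({"role": "assistant",
    #                      "content": reply.choices[0].message.content})
    conversation.append({"role": "user", "content": prompt})

print(len(conversation))  # 5
```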

 

We currently have GitHub Copilot, which integrates with Visual Studio and can generate code when we provide a description of the problem statement.

With LangChain, I learnt about its core components, including LLM wrappers, prompt templates, chains, and agents. LLMs are pure text-completion models: they take a string as input and return a string as output. Chat models, by contrast, use a language model as their base but accept a list of chat messages as input and return a chat message.

A minimal example, assuming an OpenAI API key is configured (the import path follows the classic langchain package used throughout this post):

from langchain.llms import OpenAI

llm = OpenAI()  # requires the OPENAI_API_KEY environment variable

response = llm.invoke("List the seven wonders of the world.")

print(response)

LangChain's PromptTemplate class simplifies the creation of dynamic prompts by leveraging Python's str.format syntax. This allows you to define templates with placeholders, which are then replaced with specific values to generate tailored prompts. The ChatPromptTemplate variant does the same for lists of chat messages:

from langchain.prompts import ChatPromptTemplate

chat_template = ChatPromptTemplate.from_messages([
    ("system", "You are a .NET expert named {name}."),
    ("user", "Hello, can you help me with a .NET issue?"),
    ("assistant", "Of course! What is your problem?"),
    ("user", "{user_issue}"),
])

formatted_conversation = chat_template.format_messages(
    name="Charlie",
    user_issue="I'm having trouble with async/await in C#.",
)

for message in formatted_conversation:
    print(message)

Retrieval involves extracting relevant data from a corpus based on a query, using techniques like keyword matching or semantic search. It's essential for NLP applications to find and use existing knowledge efficiently.

from langchain.document_loaders import TextLoader

loader = TextLoader("./sample.txt")

documents = loader.load()  # returns a list of Document objects
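As a toy illustration of the keyword-matching side of retrieval (in practice LangChain delegates this to retrievers and vector stores), the following self-contained sketch scores a few invented sample documents by how many query terms each contains and returns the best match:

```python
# Toy keyword-matching retrieval: score each document by the number of
# query terms it contains, and return the highest-scoring one.
documents = [
    "LangChain provides prompt templates, chains, and agents.",
    "Pinecone is a managed vector database for similarity search.",
    "GitHub Copilot generates code inside Visual Studio.",
]

def retrieve(query, docs):
    terms = set(query.lower().split())
    # Count overlapping words between the query and each document.
    scores = [len(terms & set(d.lower().split())) for d in docs]
    return docs[scores.index(max(scores))]

print(retrieve("vector database similarity", documents))
```

Semantic search replaces the word-overlap score with similarity between embedding vectors, which is where the vector database comes in later.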

 

LangChain's memory system is crucial for enhancing conversational interfaces by remembering past interactions. It involves storing and querying information, with two core actions: reading and writing. Memory is touched twice during a run: once to read stored context that augments the user's input, and once to write the new interaction so it is preserved for future use.

There are two main aspects to conversation memory.

Storing chat messages: methods such as in-memory lists and databases record all chat interactions, ensuring they are available for future reference.

Querying chat messages: data structures and algorithms build a useful view of the stored messages. This can involve returning recent messages, summarizing past interactions, or focusing on entities mentioned in the current interaction.
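The two actions above can be sketched in plain Python, without LangChain: an in-memory message store with a write action for recording turns, and a query that returns a recent-messages view (the class and method names are my own for this sketch).

```python
# Plain-Python sketch of the two memory actions: writing messages to an
# in-memory list, and querying a useful view of them (the N most recent).
class ChatMemory:
    def __init__(self):
        self._messages = []            # in-memory list of (role, content) pairs

    def write(self, role, content):    # called after each turn
        self._messages.append((role, content))

    def read_recent(self, n=2):        # query: a view of the last n messages
        return self._messages[-n:]

memory = ChatMemory()
memory.write("user", "What is async/await?")
memory.write("assistant", "A pattern for asynchronous code.")
memory.write("user", "Show an example in C#.")

print(memory.read_recent(2))
```

A summarizing or entity-focused memory would replace `read_recent` with a different view over the same stored list.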

I also came across the importance of vector databases, using Pinecone. Its high-performance vector search can store and retrieve the embeddings generated through LangChain, allowing for more accurate and contextually relevant responses in conversational interfaces.

 

During this educational journey, I worked on a project that deepened my understanding of how OpenAI and LangChain can replace manually reading a book to locate straightforward answers, using the OPL stack (OpenAI, Pinecone, and LangChain).

 

The steps are as follows:

  1. Prepare the search data by loading documents.
  2. Split the documents into chunks.
  3. Embed these chunks into numeric vectors using an embedding model such as OpenAI's text-embedding-ada-002.
  4. Save the chunks and their embeddings to a vector database such as Pinecone.
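The splitting step can be sketched as a naive fixed-size chunker with overlap, so that context at chunk boundaries is not lost (a real pipeline would typically use one of LangChain's text splitters; the sizes here are purely illustrative):

```python
# Naive fixed-size chunking with overlap: neighbouring chunks share
# `overlap` characters so sentences at boundaries keep some context.
def split_into_chunks(text, chunk_size=100, overlap=20):
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

book = "x" * 250  # stand-in for a long document
chunks = split_into_chunks(book)
print(len(chunks), [len(c) for c in chunks])  # 4 [100, 100, 90, 10]
```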

For a user query, an embedding is generated for the question, and the chunk embeddings are ranked by similarity to the question's embedding. The most relevant chunks are then used to ask the model, which returns an answer. This technique, known as retrieval augmentation, retrieves relevant information from an external knowledge base to enhance the model's capabilities.
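The ranking step can be sketched with cosine similarity over toy 3-dimensional vectors (real embeddings such as text-embedding-ada-002 have 1,536 dimensions; the chunk names and vectors below are invented for illustration):

```python
import math

# Cosine similarity: how aligned two embedding vectors are, ignoring length.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy chunk embeddings standing in for vectors returned by an embedding model.
chunk_embeddings = {
    "chapter on memory": [0.9, 0.1, 0.0],
    "chapter on agents": [0.1, 0.9, 0.1],
    "chapter on retrieval": [0.2, 0.1, 0.9],
}
question_embedding = [0.3, 0.1, 0.95]

# Rank chunks by similarity to the question; the top chunks would be passed
# to the model as context for answering.
ranked = sorted(chunk_embeddings,
                key=lambda c: cosine(chunk_embeddings[c], question_embedding),
                reverse=True)
print(ranked[0])  # chapter on retrieval
```

A vector database like Pinecone performs this same nearest-neighbour ranking, but at scale and with approximate search for speed.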

This approach allows for the development of a question-answering application on custom data, enabling the model to access and respond to information beyond its training data.

In conclusion, AI has brought both great benefits and challenges. It helps in many areas, such as health, jobs, and learning, but it also raises concerns about employment, privacy, and misuse. It is important to use AI responsibly and ethically. Despite these challenges, AI has great potential to help people and solve big problems. As we adopt AI, we should work together to ensure it benefits everyone and is used safely.
