Pic credit: The above image is from pexels by Taryn Elliott

What is Generative AI?
Generative AI is the type of AI that can generate data, such as text, images, code, etc. It learns to do this after being trained on huge data sets. But to get a specific output, the user needs to provide specifics, for example:
- role: Act as a senior developer
- output format: a Python file
- problem: Write Python code to generate prime numbers
Generative AI with Ollama:
If you are a rookie tester and curious about Generative AI, you can start with Ollama.
What is Ollama?
Ollama is a tool for running LLMs (large language models) on your own machine.
The best part:
- It’s open source
- It’s local too
Alright, here I am after a gap of a few months. Gen AI is creating a lot of buzz. While several names like ChatGPT, Perplexity, Google Gemini, etc. are doing the rounds, wait… DeepSeek. Eeeek! Some folks did get scared for a while.
As a beginner, one should be concerned about privacy issues. You need to issue a prompt that contains the details of the task you want to get done, for example:
Prompting can be of different types: generic and specific. If a prompt includes one example, it is one-shot prompting; if it includes more, it is multi-shot prompting.
A generic prompt:
Please provide me with Python Code to generate prime numbers from 1 through 100.
I will go through the output of this prompt a little later, but there could be some concerns
Such as:
Can the details reveal too much of what should be private information of your organization?
Can that information be used against you or your company?
The solution could be an open-source alternative called Ollama.
Ollama can run locally on your machine, and it can run without the internet.
The installation and setup of Ollama on Windows 11 was not quite the breeze I thought it would be, because:
The Windows download (ollama.exe) installed successfully, but it did not launch.
So I used WSL2 to download and install Ollama instead.
At the end, it told me it had detected an NVIDIA GPU on my system 😊
You also need to download a model, the latest one at the time being llama3.3. Once it was downloaded, I ran it, only to find that I did not have enough memory for this model 😒
I thought I would be warned during the download itself. So I removed it.
So, I went a step down and downloaded llama 3.0 – same response – 😒
Finally, I downloaded llama2 and it went well – 😌
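The download/run/remove cycle above maps to a few Ollama CLI commands. A sketch of what I ran (model names are the ones mentioned above; you will need the Ollama binary installed for any of this to work):

```shell
# Pull a model (this downloads the weights; no memory check happens here)
ollama pull llama3.3

# Try to run it: this is where an out-of-memory failure shows up
ollama run llama3.3

# Remove a model that turned out to be too big for your machine
ollama rm llama3.3

# Fall back to a smaller model
ollama pull llama2
ollama run llama2
```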
Meanwhile, I also tried installing it on my Linux (Zorin) machine with no GPU, only a CPU and 6 GB of RAM. There, neither any of the above models nor the so-called small and efficient “phi” would run, due to insufficient memory 😞
For a beginner, my advice would be: if you have around 16 GB of RAM and a GPU, llama2 could be the way to start.
For the bigger models, I think you need a separate machine: a dedicated CPU/GPU setup with lots of RAM.
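Once a model is running, Ollama also serves a local HTTP API (on port 11434 by default), so you can prompt it from code instead of the terminal. A minimal sketch using only the Python standard library; the model name here is an assumption, use whichever model you actually pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model, prompt):
    """Send a prompt to a locally running Ollama server and return its reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a local Ollama server with the model pulled):
#   print(ask_ollama("llama2", "Write Python code to generate prime numbers"))
```

Because everything stays on localhost, the privacy concerns raised earlier about sending company details to a hosted service do not apply.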
Coming back to the generic prompt I mentioned earlier in this post (Python code to generate prime numbers), I got the following code:
```python
def generate_primes(stop):
    primes = []
    for i in range(1, stop + 1):
        is_prime = True
        for j in range(2, int(i ** 0.5) + 1):
            if i % j == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(i)
    return primes

print(generate_primes(100))
```
The output I got was:
[1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
Although the requested range was 1 through 100, 1 was included as a prime number, even though 1 is not prime. A red flag? Err… maybe a yellow flag.
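For reference, the fix is tiny: start the outer loop at 2, so that 1 (which is not prime) never enters the list in the first place:

```python
def generate_primes(stop):
    """Return all primes from 2 through stop (1 is not a prime number)."""
    primes = []
    for i in range(2, stop + 1):  # start at 2, not 1
        is_prime = True
        # only need to check divisors up to the square root of i
        for j in range(2, int(i ** 0.5) + 1):
            if i % j == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(i)
    return primes

print(generate_primes(100))
# [2, 3, 5, 7, ..., 89, 97] (25 primes; 1 is no longer included)
```

Spotting that kind of off-by-one detail is exactly the sort of check you still need a human for.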
Final take on Generative AI with Ollama:
Not making any judgements, but simply relying on Generative AI to spit out code for you, without knowing the programming language or programming as such, could mean spending more time fixing issues than actually being productive.