First safety benchmark released for LLMs, assesses problematic output

US-based AI consortium MLCommons announced a new safety benchmark for LLMs, dubbed the AI Safety v0.5 proof-of-concept, on Tuesday.

The benchmark focuses on measuring the safety of general purpose LLMs in specific hazard areas. The benchmark specifically assesses AI chat models used for English text interactions, primarily in North America and Western Europe.

According to the consortium’s AI Safety working group, the benchmark comprises a battery of tests: a test engine poses prompts to the LLM being assessed, gauges its responses, and rates the model based on its performance.

“The MLCommons AI Safety working group, with its uniquely multi-institutional composition, has been developing an initial response to the problem, which we are pleased to share today,” said Percy Liang, co-chair of the working group.

MLCommons was also responsible for MLPerf, one of the leading benchmarks for AI performance.

Likewise, the AI safety benchmark could become a major tool for assessing the safety of an LLM. However, the benchmark has not yet been fully released.

The POC is currently open for community experimentation, following which, based on feedback, the consortium aims to release the full v1.0 later this year.

Currently, the number of “hazards” the benchmark tests for is limited, covering child sexual exploitation, weapons of mass destruction, enabling and encouraging criminal behaviour, hate, and self-harm. In total, 13 categories of hazardous topics have been identified, a number that may grow with the release of the full version.

The POC includes as many as 43,000 test prompts. Responses from the system under test (SUT) are classified using Meta’s Llama Guard, and based on these classifications, drawn from several hundred community-submitted tests, the benchmark assigns safety ratings ranging from high risk to low risk.
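To make the rating step concrete, here is a minimal, hypothetical Python sketch of how per-prompt safety classifications could be aggregated into a coarse risk grade. The function name and thresholds are invented for illustration and are not MLCommons’ actual methodology.

```python
# Hypothetical sketch: turn per-prompt safety classifications into a risk grade.
# The thresholds below are invented for illustration, not MLCommons' cutoffs.
def risk_grade(flags):
    """flags: list of booleans, True if the classifier deemed a response unsafe."""
    if not flags:
        return "unrated"
    unsafe_rate = sum(flags) / len(flags)
    if unsafe_rate < 0.001:
        return "low risk"
    if unsafe_rate < 0.01:
        return "moderate risk"
    return "high risk"

print(risk_grade([False] * 1000))          # no unsafe responses
print(risk_grade([False] * 999 + [True]))  # one unsafe response in a thousand
```

A real benchmark would compute such grades per hazard category rather than over all prompts at once.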

This is the first time a benchmark has been released to assess potentially hazardous output from major chatbots. Concerns about such output have been raised repeatedly, and research is ongoing into how to tackle the issue while keeping the tests dynamic.

The benchmark helps remedy this by stringing together sentence fragments into prompts that could elicit potentially problematic outputs and assessing the specific LLM’s responses to them.

Additionally, the POC is also inviting suggestions for other tests that can be run, as well as potentially hazardous content that should be screened apart from the ones already mentioned. The consortium has maintained that the POC is still a work in development and will be improved upon with more community interaction.

“The v0.5 POC allows us to engage much more concretely with people from different fields and places because we believe that working together makes our safety checks even better,” said Joaquin Vanschoren, co-chair of the working group.

The post First safety benchmark released for LLMs, assesses problematic output appeared first on Analytics India Magazine.

Tech Worker Salary Growth in Australia Has Normalised

IT workers in Australia are not so far past the pandemic that they have forgotten the surge in salaries it produced. As companies ramped projects back up after shelving them during 2020, competition for tech skills and talent was suddenly back on in earnest, and salaries spiked.

That grab for talent tapered off through 2023 as economic headwinds caused businesses to recalibrate, with some tech job losses as businesses restructured. However, there were still plenty of lucrative opportunities for tech workers, as pointed out by recruiter Hays in 2023.

While some indicators show tech salaries are stalling in 2024, this is in the context of a decade of steady growth, says Nicole Gorton, director at recruiter Robert Half. Gorton predicts salaries will hold up due to a continued tech skills shortage, especially in areas of high demand.

Salary reports point to less tech worker pay packet growth

Reports in early 2024 have suggested Australian tech worker salaries are flatlining or falling.

Software firm Employment Hero’s SME Index for February 2024 pegged a decline in the median hourly rate for tech workers in 1,500 Australian SMEs from $57.20 to $57.12 over the last year. This made tech the only sector in the country where worker pay went backward and contrasted with data for other Australian workers, which saw hourly rates rise by 6%, the firm said.

Data from Australian jobs website Seek suggests advertised salaries in information and communication technology grew just 1.9% in the 12 months to January 31. Job advertisements on the platform were down 26.9% year-on-year, according to data published for March.

This recent data may not contain the full picture; for instance, there are a lot of talented tech workers who sit outside the sample of SME users of Employment Hero’s payroll and HR platform. Also, about two thirds of all advertised open positions on Seek do not contain salary information.

Think & Grow’s Australian Tech Salary Guide is a good reminder of the difference in salary growth between roles. Last year, its report found that, while team leaders had seen only 1.4% salary growth from 2022 to 2023, software engineer salaries grew by a median of 18.5%.

Robert Half says salary data needs to be seen in context

Nicole Gorton, director at Robert Half.

Nicole Gorton, director at recruiter Robert Half, said Australian tech workers had seen a decade of steady salary increases, followed by a post-pandemic spike. Though that spike gave way to restructuring after “zealous aggressive hiring,” she characterised the current slowdown as a normalisation.

“I would say not to look at the recent three years as normal. They were not. While activity has dropped off, the question is, dropped off from what? Supply and demand now is relatively healthy. Companies are hiring to replace, and sometimes hiring to add,” she said.

Robert Half’s 2024 Australia Salary Guide stated IT and technology workers could expect 4.5% salary growth this year, compared with 4.2% expected growth overall. Salaries are also still relatively healthy in comparison to where they were pre-pandemic, Gorton said.

SEE: 4 things Australian IT leaders can do to build the future tech team they want.

Employment Hero also pointed out the hourly rate for ICT workers is the highest of any sector, while Think & Grow found few jobs paid under $100,000. This puts tech salaries above the average weekly salary for full-time workers, which was $1,888.80 in November 2023.

“Though the fever pitch of inflated salaries is subsiding and rapid growth may be moderating, salaries are expected to remain healthy in my opinion, especially in areas of demand like data, cyber security, systems engineering and business transformation,” Gorton said.

The IT hiring market in 2024 contains opportunity for workers

Tech workers are now in a market where companies are “cautiously positive,” Gorton said, though there remains a “level of caution in the hiring economy.” While companies “are not all guns blazing,” they will hire depending on projects, especially in areas of high demand.

Demand stretches from junior to senior level roles and can depend on industry sectors. Robert Half’s report found in-demand roles were for developers, business analysts, cyber security specialists, data engineers, BI and data analysts, systems engineers and cloud engineers.

Gorton said businesses were looking to ICT workers to address the critical challenges faced by business today in a digital world, like leveraging data, securing systems, building robust infrastructure and bridging the gap between business needs and technology solutions.

Employers positioning themselves for artificial intelligence

Employers are also looking at how their current tech workforce positions them for adapting to an AI environment. “Companies are hiring people with those (AI) skill sets to make sure that they are staying at the top of their game and being competitive in their industry,” Gorton said.

In some ways, companies are trying to navigate the “unknown,” Gorton said.

This is causing them to be more strategic about hiring, so they are prepared and “don’t miss the boat or get caught up in a hiring frenzy. They are still trying to work out how to maintain cost structure but add weight to technology so it supports growth of business,” she added.

Australian salaries still attractive for APAC tech workers

Exceptional salary growth may have levelled off, but tech work in Australia is still desirable. And work exists; the Towards a National Jobs and Skills Roadmap report suggested this was still the case for 70% of ICT occupations, leaving plenty of opportunity for international workers.

The Australian Government is doing more to tap global tech talent. For example, its Migration Strategy includes the introduction of a Specialist Skills Pathway for highly skilled workers, in addition to new four-year Skills in Demand visas for workers in areas of skills shortage.

Gorton acknowledged existing visas can present a barrier. However, she said Australia can present an attractive proposition for APAC tech workers, as long as salaries are in line with things such as inflation rates and the cost of living, which can even vary between Australian states.

“Australia is a land of technological expansion and growth,” Gorton said. “Private equity firms are investing hard and fast in this space in Australia, and I think with our borders open it does allow people to land and expand their skill sets across geographies,” she said.

Deloitte Partners with Yotta to Accelerate GenAI Adoption in India

Deloitte Deploys NVIDIA’s DGX A100s For Its New AI Computing Centre

Deloitte India has announced a strategic alliance with Yotta Data Services to provide clients access to NVIDIA GPU computing infrastructure and help them quickly develop innovative Generative AI applications.

The alliance aims to bring world-class AI capabilities to organisations in India, where AI is expected to contribute US$500 billion to the country’s GDP by 2025 and around US$1 trillion by 2035, aligning with India’s AI mandate and the government’s recent approval of major investments to drive AI innovation.

Deloitte’s relationship with Yotta is expected to deliver multiple client benefits, including ongoing value realisation linked to business outcomes, scalable access to GPUs at optimised pricing, software and hardware optimised for AI workloads, and access to top-tier AI services and multi-disciplinary talent.

“As GenAI gains momentum, it opens doors for businesses to reinvent how work is done and unlock fresh avenues of creativity,” said Nitin Mittal, Deloitte’s Global Generative AI Leader. “Our relationship with Yotta will help our clients and people harness the power of this disruptive new technology and accelerate business innovation.”

According to a recent Deloitte survey, 77% of Indian CXOs are excited about GenAI’s transformative nature. “Our alliance with Yotta in the Indian market will help us empower our clients to navigate GenAI’s transformative landscape with confidence and seize the limitless possibilities it offers to shape the future of business,” said Vinay Prabhakar, Deloitte India’s Leader for Sales, Alliances, and Pursuit Excellence.

Sunil Gupta, Co-Founder, MD and CEO of Yotta Data Services, said, “Our scalable GPU cloud platform and AI services, combined with Deloitte’s extensive industry expertise and service offerings, will energise Indian businesses, government agencies, start-ups, GCCs, and researchers with unparalleled high-performance computing as a service and AI as a service. This will help clients with practical solutions that drive results and foster growth in today’s dynamic landscape.”

The alliance expands Deloitte’s GenAI practice’s capabilities and follows several recent AI announcements from the company, including the launch of Quartz AI, a suite of industry-specific AI service offerings built on NVIDIA platforms, and thought-leading research from the Deloitte AI Institute.

Yotta, as the first NVIDIA Network Cloud Partner in India, delivers cutting-edge GPU computing infrastructure and platforms through its Shakti Cloud, one of the largest supercomputers in the world, with more than 16,000 H100 GPUs. Yotta seeks to democratise access to GPU resources, fostering innovation and competitiveness across sectors in India.


Boston Dynamics Unveils New ‘Electric Atlas’ Humanoid with Rotatable Head and Torso

Boston Dynamics Electric Atlas

Boston Dynamics, the American engineering and robotics company, yesterday announced the retirement of its hydraulically actuated Atlas robot, a decade after its creation. In less than 24 hours, the company announced its new humanoid: the Electric Atlas.

The New and Improved

With a face resembling a Pixar lamp, the new electric version of Atlas is stronger, more dextrous and agile. Atlas is designed to move with maximum efficiency to accomplish tasks, without being limited by human range of motion.

The upcoming electric iteration of Atlas comes with enhanced strength and a wider range of movement than its predecessors. The hydraulic Atlas demonstrated proficiency in lifting and navigating diverse heavy and irregular items; building on these capabilities, the company will offer multiple new gripper designs to cater to the varied manipulation requirements anticipated in customer settings.

“Atlas may resemble a human form factor, but we are equipping the robot to move in the most efficient way possible to complete a task, rather than being constrained by a human range of motion. Atlas will move in ways that exceed human capabilities,” the company said in its blog post.

The humanoid robots are equipped with reinforcement learning and computer vision, among other AI programs, to ensure they can operate and adapt to complex real-world situations.

Real-World Applications

The use-cases for Atlas primarily revolve around the automotive industry. Hyundai has invested in Atlas and will develop the next generation of automotive manufacturing capabilities, providing an ideal environment to test new Atlas applications.

Boston Dynamics will look to showcase its potential in various settings including laboratories, factory floors and daily life.

Interestingly, Figure 01, the humanoid robot from Figure AI, is being tested on household chores, including making coffee and arranging dishes. The robot can also converse with humans. While there is no information available on Atlas’s ability to talk or do such work, the era of humanoid robots has only just begun.


Brave Search is adopting AI to answer your queries

By Ivan Mehta

Privacy-focused search engine Brave announced Wednesday that it is revamping its answer engine to return AI-powered synthesized answers. The new feature is available to users across the globe.

The new “Answer with AI” feature returns neatly formatted answers for questions like “People who walked on the moon,” “List of all actors who played Batman” or “How to descale a Nespresso Pixie.” It can also summarize reviews and salient points of a restaurant, for example.

Image Credits: Brave

The company launched an AI-powered summarization feature in March 2023. The startup said that its new AI-powered search is a big upgrade of that.

Brave said that informational queries, such as the examples listed above, will automatically rely on AI to present information in a summarized format. For other queries, users can trigger an AI search manually.


The company, which switched to using its own index for search queries last year, said that its “Answer with AI” feature uses a combination of large language models (LLMs) and reliable data. Brave said it uses a combination of Mixtral 8x7B and Mistral 7B as primary models along with custom transformer models for semantic matching and answering.

“The user only needs to enter a query as they are used to doing with a regular search engine. The query will then be internally converted to an LLM prompt using the data from search results as context to the prompt, with typical RAG (retrieval augmented generation),” the company’s head of search, Josep Pujol, told TechCrunch via email.
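The flow Pujol describes can be sketched in a few lines of Python. This is a generic illustration of RAG prompt assembly, not Brave’s actual pipeline; the function name and prompt template are assumptions.

```python
# Generic RAG sketch: fold search-result snippets into an LLM prompt as context.
def build_rag_prompt(query, snippets):
    context = "\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, start=1))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

prompt = build_rag_prompt(
    "Which actors played Batman?",
    ["Michael Keaton played Batman in 1989.",
     "Christian Bale played Batman in Batman Begins (2005)."],
)
# `prompt` would then be sent to the model (e.g. Mixtral 8x7B) for generation.
```

The key design point is that the model answers from retrieved search results rather than from its parametric memory alone, which is what lets the engine ground answers in its own index.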


Multiple reports have pointed out that AI-powered search could have grave effects on the future of the web. Brave, which serves over 10 billion queries a year, said that while users are demanding AI-augmented answers and new methods of content consumption, the company is aware that this approach might be detrimental to publishers putting out content on the web.

“This challenge is not unique to Brave Search but present across most AI-powered answer engines and chatbots, premium or open. Brave, as both a browser and search engine, is aware of these challenges. Consequently, we will be monitoring and quantifying the impact of AI-generated content on site visits, and eventually will address the disruptions that the drop in traffic could cause,” the company said.

Other search engines, such as Google and Bing, have also adopted AI-powered answers through different experiments. Meanwhile, startups like Perplexity and You.com are also vying to be the answer engine of choice for users.

Keyboards will Soon Become Obsolete 


Entrepreneur and investor Naval Ravikant recently re-launched his social media app, Airchat, when there was no dearth of such platforms already. However, the USP of this one is that the app is completely voice-centric – the interaction is via voice only.


Airchat might just be the latest entrant highlighting the power of voice, but a number of recent AI platforms and devices have already brought voice as a predominant user interface.

Dawn of the Voice Era

Multimodal AI was identified as one of Microsoft’s AI trends for the year, and going by the AI developments this year, voice modality is emerging as a key feature.

The latest Humane Ai Pin, a small wearable device that performs as a personal assistant and even works as a probable replacement for a smartphone, essentially works on voice. Any type of interaction with the device like making calls, reading messages, and clicking pictures can be executed through voice commands.

Bethany Bongiorno, co-founder of Humane, believes voice will be an integral part of an AI future. “Voice-first in an AI future,” she said. Similarly, AI devices such as the Rabbit R1, a pocket-sized gadget built around a large action model, also operate on voice commands.

Brett Adcock, CEO and founder of robotics company Figure AI, said, “We believe the default user interface for the robot is speech. You’re going to want to talk to the robot. Even in an industrial setting, when you’re unboxing the robot for the first time, we think the initialisation process is speech.”

How Do We Assess Them?

With voice models come a different set of evaluation parameters. Interestingly, the shift has started after a year-long emphasis on text-based AI generation.

Benchmarks and leaderboards for evaluating text-based models have always been a topic of LLM discussions. The need has also given rise to an Indic LLM Leaderboard. However, leaderboards for voice-based generative models are not that prominent.

There are evaluation parameters for voice-based models, such as latency, word error rate (WER), short-time objective intelligibility (STOI), miss rate, and ROC curves. These parameters measure accuracy in terms of speech quality and speech intelligibility.
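Of these, word error rate is the simplest to illustrate: it is the word-level edit distance between a reference transcript and the model’s hypothesis, divided by the number of reference words. A minimal implementation, written from the standard definition rather than taken from any particular toolkit:

```python
# Word error rate: word-level Levenshtein distance divided by reference length.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i ref words and first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the quick brown fox", "the quick brown fox"))  # 0.0
print(wer("hello world", "hello word"))                   # 0.5 (one substitution)
```

Production toolkits add normalisation steps (lowercasing, punctuation stripping) before scoring, but the core metric is exactly this ratio.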

Shift from Text to Voice

Chatbots that aid multiple functions, such as HR operations or finding love, are essentially text-based. However, a gradual shift is now underway.

Last month, Hume AI released EVI, an empathetic voice interface AI model. Users can converse with the model normally, where the model will be able to analyse and understand a user’s emotion based on the tone of the voice and other features. It almost serves as a therapist.

Hume marks a big shift from similar platforms such as Inflection’s Pi, which acts as an emotionally intelligent AI that helps with one’s emotional needs.

Not All Big Tech are Gung-ho

While the big tech companies are integrating voice in one form or another, be it OpenAI’s ChatGPT or Google’s Gemini, their models are multimodal, offering voice as a standard mode of interaction. Interestingly, one major player, Apple, is not too keen on this form of modality yet.

For a company making strides in bringing generative AI features to its phones, and even releasing AI models such as ReALM that reportedly outperform GPT-4 on some tasks, Apple is yet to catch up in the voice game.

However, voice is not completely alien to Apple: its spatial computing device, the Apple Vision Pro, can be controlled using voice.

Further, the company’s famed voice assistant, Siri, is expected to get advanced AI features which will probably be announced at the Apple WWDC 2024 event in June. The feature might be a major boost to Apple’s voice modality function.

While voice is being increasingly adopted, companies are still relying on text-based chatbots. IT company, Happiest Minds recently announced ‘hAPPI’, a generative AI-powered chatbot that will converse with users on health and wellness-related queries.

It is obvious that to get to the closest level of human-like interaction, voice becomes indispensable. After all, “Humans are all meant to get along with other humans, it just requires the natural voice,” said Ravikant.

PS: The story was written using a keyboard.


How India Will Continue to be AI’s Wild West

Prime Minister Narendra Modi recently acknowledged AI’s contribution in drafting the 2024 BJP manifesto.

“I contacted all the universities, different NGOs, and about 15-20 lakh people gave their inputs. Then I took the help of AI and classified them subject-wise,” he said in a recent interview, on the thought process behind the manifesto.

He said that his team made ample use of AI, specifically in trying to ascertain how achievable their goals were, stating, “As soon as the elections are over, it will be sent to the states. I would like the states to work on it.”

The prime minister’s party, BJP, has cemented its position as being more tech-inclined than most parties in the country, maybe even the world, and they aren’t hesitant about taking full credit for the industry surrounding AI in India.

In another interview, Union finance minister Nirmala Sitharaman said that the sole reason for India’s move towards Industry 4.0 was the government laying the groundwork for the same over the past five years.

“India’s industry will move to the 4.0 phase on the back of several steps that have been taken between 2019 and 2024, such as giving an emphasis for artificial intelligence, setting up new centres of excellence, making sure 6G capability is brought in India completely indigenously, and 5G being successfully rolled out in this country,” she said.

However, it is well known that while India’s industry is booming, infrastructure and government involvement are still lacking, especially when it comes to encouraging indigenous innovation.

While AI remains a novelty for much of the general Indian population, many advancements around the world trace back to India, though primarily to its private sector. What has become a growing need, however, is an underlying, central framework to support these developments.

So, is it likely that this framework will come in the next few years, or will the Indian government continue taking a backseat regardless of a regime change?

What Offerings Do the Parties Bring to the Table?

Amidst much finger-pointing and accusations of AI-use during campaigning, both major parties released their manifestos.

Needless to say, neither party had much to say about how they plan on growing the industry. At least not more than what they’ve said already. While in their previous manifesto, BJP made bold claims of pushing for Industry 5.0 through specific AI and quantum missions, their implementations are lacklustre at best.

Despite a booming industry, the government has lagged in introducing overarching frameworks. Major changes have only come to fruition in minor and niche ways, like initiatives affecting thousands of people in a country of over a billion, with the closest overarching initiative being IndiaAI, or novelty uses like NaMo AI.

With their newest manifesto, the party has promised less, with a vague assurance to “develop a comprehensive ecosystem” and focus on indigenous development. Though, whether this will continue to be developed through use of already existing models, or through India-made models is uncertain.

Meanwhile, the new Indian National Congress’ manifesto is similarly vague and underwhelming, promising to “support the use of Artificial Intelligence”, while ensuring job opportunities in sectors that use “conventional technology”.

In 2019, INC proposed initiatives like strengthening patent laws to protect Indian inventors and enhancing the India Inclusive Innovation Fund, potentially giving a leg up to AI innovators.

But, much like with the BJP manifesto, it’s hard to tell whether these plans would have been implemented. And if they had been, we can’t be sure whether they would have been pursued sincerely or just to tick off a list of overpromising manifesto points.

Increased Interest May Not Necessarily Be Good

While Indians have managed to pull in many firsts within the industry, it’s unsurprising that India was not the first to propose a national framework.

Late last year, union minister for IT and entrepreneurship Rajeev Chandrasekhar admitted that something needed to be done. “We need a global framework urgently because, in the next 6-9 months, AI will take shape and evolve in a way that we may not anticipate or fully understand,” he said.

However, with major players within the EU, US and Australia already pushing for frameworks or having established their own, it seems that India is lagging far behind, at least at the government level.

Whether this spells something good or bad for the industry as a whole, may be a different discussion altogether.

With no framework in place, India could be putting its own innovators at a disadvantage while also encouraging outside players to enter the ecosystem and take advantage of India’s laxity, even as conversations on AI regulation become more serious around the world.


The State of Data Engineering in India: 2024

Data engineers are primarily responsible for the collection, storage, and processing of data: they convert raw data into forms that data scientists can use for analysis, and they handle data transformation, integration, and automation of data pipelines.

With the advent of digital technologies, a significantly large volume of data is being generated through sources including sensors, transactions, and social media. Data engineers play a vital role in managing this data and ensuring its quality and integrity, creating pipelines that collect data from various sources and store it in a single warehouse in a uniform representation, ready for data scientists to analyse. The first section of the report provides an industry overview of data engineers in India, highlighting aspects such as salaries and the number of data engineers across sectors.

The next section provides an overview of job openings for data engineers, detailing openings by sector, salary bracket, company size, and work model. The report also covers the distribution of data engineers by geography, role, required skills, and work model, and provides a detailed analysis of attrition rates for data engineers across India. Furthermore, it highlights the key skills demanded of data engineers in the present market, and concludes with the distribution of data engineers by role in India.

Key takeaway: Generative AI is revolutionizing data engineering by significantly accelerating the transformation of raw data into actionable insights.

For previous year’s reports:

2022 | 2023

Key Findings

  • Senior Talent Demand: The proportion of data engineers with 6+ years of experience increased significantly across various sectors, rising from 27% in 2023 to 38% in 2024. This suggests a growing demand for senior-level talent in the field.
  • Salary Growth: There has been a notable increase in the number of data engineers earning between ₹6-10 lakhs, which escalated from 24% in 2023 to 32.4% in 2024, an increase of approximately 8.4 percentage points.
  • Job Availability: While precise figures are challenging to pin down, there were 10,593 job openings for data engineers across all industries found on online job portals.
  • Attrition Rates: The overall attrition rate in companies decreased from 18% in 2023 to around 14% in 2024, indicating improved retention strategies and possibly better job satisfaction or market conditions.
  • Sector-Specific Employment: In the non-IT sectors, the BFSI sector accounted for 49.3% of data engineer employment in 2024, highlighting a significant concentration of data engineering roles in this industry.

Read the complete report here:


Build a Command-Line App with Python in 7 Easy Steps


Building simple projects is a great way to learn Python and any programming language in general. You can learn the syntax to write for loops, use built-in functions, read files, and much more. But it’s only when you start building something that you actually “learn”.

Following the “learning by building” approach, let’s code a simple TO-DO list app that we can run at the command line. Along the way, we’ll explore concepts like parsing command-line arguments and working with files and file paths. We’ll also revisit basics like defining custom functions.

So let’s get started!

What You’ll Build

By coding along to this tutorial, you’ll be able to build a TO-DO list app you can run at the command line. Okay, so what would you like the app to do?

Like TO-DO lists on paper, you need to be able to add tasks, look up all tasks, and remove tasks (yeah, strikethrough or mark them done on paper) after you’ve completed them, yes? So we’ll build an app that lets us do the following.

Add tasks to the list:


Get a list of all tasks on the list:


And also remove a task (using its index) after you’ve finished it:

(Screenshot of removing a task; image by author)

Now let’s start coding!

Step 1: Get Started

First, create a directory for your project. And inside the project directory, create a Python script file. This will be the main file for our to-do list app. Let's call it todo.py.

You don't need any third-party libraries for this project; just make sure you’re using a recent version of Python. This tutorial uses Python 3.11.

Step 2: Import Necessary Modules

In the todo.py file, start by importing the required modules. For our simple to-do list app, we'll need the following:

  • argparse for command-line argument parsing
  • os for file operations

So let’s import them both:

import argparse
import os

Step 3: Set Up the Argument Parser

Recall that we’ll use command-line flags to add, list, and remove tasks. We can use both short and long options for each argument. For our app, let’s use the following:

  • -a or --add to add tasks
  • -l or --list to list all tasks
  • -r or --remove to remove tasks using index

Here’s where we’ll use the argparse module to parse the arguments provided at the command line. We define the create_parser() function that does the following:

  • Initializes an ArgumentParser object (let’s call it parser).
  • Adds arguments for adding, listing, and removing tasks by calling the add_argument() method on the parser object.

When adding arguments we add both the short and long options as well as the corresponding help message. So here’s the create_parser() function:

def create_parser():
    parser = argparse.ArgumentParser(description="Command-line Todo List App")
    parser.add_argument("-a", "--add", metavar="", help="Add a new task")
    parser.add_argument("-l", "--list", action="store_true", help="List all tasks")
    parser.add_argument("-r", "--remove", metavar="", help="Remove a task by index")
    return parser
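If you want to sanity-check the parser before wiring up the rest of the app, note that parse_args() also accepts an explicit list of strings. Here’s a quick sketch (the attribute names on the returned Namespace mirror the long option names):

```python
import argparse

# Rebuild the same parser as create_parser() to inspect what parse_args() returns.
parser = argparse.ArgumentParser(description="Command-line Todo List App")
parser.add_argument("-a", "--add", metavar="", help="Add a new task")
parser.add_argument("-l", "--list", action="store_true", help="List all tasks")
parser.add_argument("-r", "--remove", metavar="", help="Remove a task by index")

# Passing a list of strings simulates command-line input without running the app.
args = parser.parse_args(["-a", "Walk 2 miles"])
print(args.add)   # Walk 2 miles
print(args.list)  # False: store_true flags default to False when absent
```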

Step 4: Add Task Management Functions

We now need to define functions to perform the following task management operations:

  • Adding a task
  • Listing all tasks
  • Removing a task by its index

The following function add_task interacts with a simple text file to manage items on the TO-DO list. It opens the file in the ‘append’ mode and adds the task to the end of the list:

def add_task(task):
    with open("tasks.txt", "a") as file:
        file.write(task + "\n")

Notice how we’ve used the with statement to manage the file. Doing so ensures that the file is closed after the operation—even if there’s an error—minimizing resource leaks.
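You can see this behavior for yourself with a quick sketch (using a throwaway file name, demo.txt): the file object reports itself closed as soon as the with block exits, without an explicit close() call.

```python
import os

# Write to a throwaway file inside a `with` block.
with open("demo.txt", "w") as f:
    f.write("hello\n")

# The context manager closed the file on exiting the block.
print(f.closed)  # True

os.remove("demo.txt")  # clean up the throwaway file
```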

To learn more, read the section on context managers for efficient resource handling in this tutorial on writing efficient Python code.

The list_tasks function lists all the tasks. Because the file is created only when you add the first task, we first check whether it exists, then read and print out the tasks. If there are no tasks yet, we print a helpful message instead:

def list_tasks():
    if os.path.exists("tasks.txt"):
        with open("tasks.txt", "r") as file:
            tasks = file.readlines()
        for index, task in enumerate(tasks, start=1):
            print(f"{index}. {task.strip()}")
    else:
        print("No tasks found.")

We also implement a remove_task function to remove tasks by index. Opening the file in the ‘write’ mode overwrites the existing file. So we remove the task corresponding to the index and write the updated TO-DO list to the file:

def remove_task(index):
    if os.path.exists("tasks.txt"):
        with open("tasks.txt", "r") as file:
            tasks = file.readlines()
        with open("tasks.txt", "w") as file:
            for i, task in enumerate(tasks, start=1):
                if i != index:
                    file.write(task)
        print("Task removed successfully.")
    else:
        print("No tasks found.")
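The overwrite behavior of the ‘write’ mode is easy to verify in isolation. A minimal sketch with a throwaway file, scratch.txt:

```python
import os

with open("scratch.txt", "w") as f:
    f.write("first\n")

# Opening in "w" mode again truncates the file before writing.
with open("scratch.txt", "w") as f:
    f.write("second\n")

with open("scratch.txt") as f:
    content = f.read()

print(content)  # only "second" survives; the first write was overwritten

os.remove("scratch.txt")  # clean up the throwaway file
```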

Step 5: Parse Command-Line Arguments

We’ve set up the parser to parse command-line arguments. And we’ve also defined the functions to perform the tasks of adding, listing, and removing tasks. So what’s next?

You probably guessed it. We only need to call the correct function based on the command-line argument received. Let’s define a main() function to parse the command-line arguments using the ArgumentParser object we’ve created in step 3.

Based on the provided arguments, call the appropriate task management functions. This can be done using a simple if-elif-else ladder like so:

def main():
    parser = create_parser()
    args = parser.parse_args()

    if args.add:
        add_task(args.add)
    elif args.list:
        list_tasks()
    elif args.remove:
        remove_task(int(args.remove))
    else:
        parser.print_help()

if __name__ == "__main__":
    main()

Step 6: Run the App

You can now run the TO-DO list app from the command line. Use the short option -h or the long option --help to get information on the usage:

$ python3 todo.py --help
usage: todo.py [-h] [-a] [-l] [-r]

Command-line Todo List App

options:
  -h, --help      show this help message and exit
  -a , --add      Add a new task
  -l, --list      List all tasks
  -r , --remove   Remove a task by index

Initially, there are no tasks in the list, so using --list to list all tasks prints out “No tasks found.”:

$ python3 todo.py --list
No tasks found.

Now we add an item to the TO-DO list like so:

$ python3 todo.py -a "Walk 2 miles"

When you list the items now, you should be able to see the task added:

$ python3 todo.py --list
1. Walk 2 miles

Because we’ve added the first item, the tasks.txt file has now been created (refer to the definition of the list_tasks function in Step 4):

$ ls
tasks.txt  todo.py

Let's add another task to the list:

$ python3 todo.py -a "Grab evening coffee!"

And another:

$ python3 todo.py -a "Buy groceries"

And now let’s get the list of all tasks:

$ python3 todo.py -l
1. Walk 2 miles
2. Grab evening coffee!
3. Buy groceries

Now let's remove a task by its index. Say we’re done with evening coffee (and hopefully for the day), so we remove it as shown:

$ python3 todo.py -r 2
Task removed successfully.

The modified TO-DO list is as follows:

$ python3 todo.py --list
1. Walk 2 miles
2. Buy groceries

Step 7: Test, Improve, and Repeat

Okay, the simplest version of our app is ready. So how do we take this further? Here are a few things you can try:

  • What happens when you use an invalid command-line option (say -w or --wrong)? argparse prints a usage error and exits by raising SystemExit; the help message from the if-elif-else ladder only appears when no arguments are given. Try handling invalid inputs gracefully using try-except blocks.
  • Test your app by defining test cases that include edge cases. To start, you can use the built-in unittest module.
  • Improve the existing version by adding an option to specify the priority for each task. Also try to sort and retrieve tasks by priority.
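As a starting point for the testing suggestion above, here’s a minimal unittest sketch. It duplicates the add_task logic inline for self-containedness (in your project you would import it from todo.py), and runs each test inside a temporary directory so the hardcoded tasks.txt never collides with your real list:

```python
import os
import tempfile
import unittest

def add_task(task):
    # Same logic as the tutorial's add_task.
    with open("tasks.txt", "a") as file:
        file.write(task + "\n")

class TestTodo(unittest.TestCase):
    def setUp(self):
        # Run each test in a fresh temporary directory so tasks.txt starts empty.
        self._old_cwd = os.getcwd()
        self._tmpdir = tempfile.TemporaryDirectory()
        os.chdir(self._tmpdir.name)

    def tearDown(self):
        os.chdir(self._old_cwd)
        self._tmpdir.cleanup()

    def test_add_task_appends_lines(self):
        add_task("Walk 2 miles")
        add_task("Buy groceries")
        with open("tasks.txt") as file:
            self.assertEqual(file.readlines(),
                             ["Walk 2 miles\n", "Buy groceries\n"])
```

Save this as test_todo.py next to todo.py and run it with python3 -m unittest test_todo. From here you can add cases for list_tasks and remove_task, including edge cases like removing an out-of-range index.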

▶️ The code for this tutorial is on GitHub.

Wrapping Up

In this tutorial, we built a simple command-line TO-DO list app. In doing so, we learned how to use the built-in argparse module to parse command-line arguments. We also used the command-line inputs to perform corresponding operations on a simple text file under the hood.

So where do we go next? Well, Python libraries like Typer make building command-line apps a breeze. And we’ll build one using Typer in an upcoming Python tutorial. Until then, keep coding!

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.

More On This Topic

  • Data Science at the Command Line: The Free eBook
  • 5 More Command Line Tools for Data Science
  • ChatGPT CLI: Transform Your Command-Line Interface Into ChatGPT
  • Master The Art Of Command Line With This GitHub Repository
  • Build An AI Application with Python in 10 Easy Steps
  • Build a Machine Learning Web App in 5 Minutes

Quora’s Poe Eats Google’s Lunch

Poe, the AI chatbot platform by Quora, recently introduced a new feature – multi-bot chat – which enables users to engage with several AI models concurrently within a single conversation thread. This comes as a boon for people who wish to chat with multiple models in a single go.

Today we are adding an important new capability to Poe: multi-bot chat. This feature lets you easily chat with multiple models in a single thread. (1/n) pic.twitter.com/jl9q6dCDmh

— Adam D'Angelo (@adamdangelo) April 15, 2024

This capability has two key components: context-aware recommendations with bots to compare answers to your query and the ability to call any bot on Poe into your chat simply by @-mentioning it. This lets you easily compare results from various bots and discover optimal combinations of models to use the best tool for each step in a workflow.

For instance, a user could leverage GPT-4 for analysis, Claude for creative writing, and DALL-E 3 for image generation — all within one thread. Poe aims to streamline how people find the optimal combination of bots for their needs.

What about Perplexity?

Source – Reddit

Poe was the first AI chatbot platform to let users chat with a choice of LLMs from a single interface. Perplexity, which follows a similar model, came into the picture much later. Poe offers a comprehensive platform where users can engage with multiple AI models seamlessly.

Perplexity, on the other hand, excels as a potent search engine driven by large language models. With Perplexity, you have to switch between models to access each one, whereas with Poe’s new feature, different models can be accessed in a single thread.

Poe’s features include AI chat, model selection, and the integration of multiple models, ensuring a diverse and personalised experience. Additionally, Poe empowers users to craft their own chatbots, leveraging existing models as a base for personalisation and experimentation. Meanwhile, Perplexity features search, answering questions, and access to advanced language models. It enhances users’ ability to obtain relevant information efficiently.

As a potent search engine, Perplexity helps while browsing the internet: if something is unclear, whether a feature or anything else, one can simply take a screenshot and ask Perplexity about it.

The AI-powered answer engine has been in a quest to establish itself as a Google alternative.

Here’s a sample query – look at the sources used by Perplexity and the top search results by Google and how identical they are. The UX is better but not fantastic as this here is just feeding search results into a LLM and generating a response.
Neither of these have… https://t.co/Kq5DndegQl pic.twitter.com/GLZTjT1GpK

— Varun (@varun_mathur) January 13, 2024

Quora marking up the future

A Reddit comment reads, “If you’re solely concerned about GPT-4 and Claude, just focus on Perplexity. While both have similar writing capabilities, Perplexity stands out for its ability to search the internet, a feature Poe lacks.”

Amid the ongoing debate over which option to choose, Quora anticipates Poe evolving into a valuable platform for diverse applications, bridging this divide and significantly streamlining the effort required for AI developers to reach a broad user base, as noted by Adam D’Angelo.

We are in the process of making an API that will make it easy for any AI developer to plug their model into Poe. (11/n)

— Adam D'Angelo (@adamdangelo) February 3, 2023

Poe’s latest feature is an instant draw for users. If Quora continues to pioneer novel pathways and offer consistently engaging user experiences, it stands a chance at establishing itself as the quintessential platform for the generative AI era.

Source – Reddit

The post Quora’s Poe Eats Google’s Lunch appeared first on Analytics India Magazine.