Microsoft Acquires 48 Acres of Land in Hyderabad for Building Data Centres


Microsoft has purchased 48 acres of land in Hyderabad, worth INR 267 crore, with plans to expand its data centre business in India. The land was acquired from Sai Balaji Developers, and the chosen property is approximately 40 km from Hyderabad's city centre.

According to reports, Microsoft intends to develop one of the largest data centres in the country as part of its expansion plans. This recent acquisition complements Microsoft’s existing network of data centre regions in Pune, Mumbai, and Chennai, which have been operational for the past five years.

Additionally, Microsoft has acquired two more land parcels in Hyderabad to support its data centre business expansion. The company has been progressively expanding its footprint in the flexible office space sector. In Hyderabad, Microsoft operates its India Development Centre, serving as a key campus alongside its locations in Bengaluru and Noida.

In response to India’s growing demand for data centres, several major players have announced significant investments and expansion plans.

Sunil Gupta, MD and co-founder of Yotta Data Services, revealed plans for five new data centre projects, including two in Mumbai, one in GIFT City, Gujarat, one in Chennai, and a 30MW campus in Dhaka, Bangladesh, aiming to explore global markets beyond India.

The surge in data centre investments includes commitments from Sify, Atlassian, and AWS. Sify has pledged over $360 million, Atlassian plans to establish data centres across India, and AWS is investing $12.7 billion to expand its data centre presence in the country.

Kotak Alternate Assets also announced plans to invest $800 million in developing 5-7 large data centres across India’s key property markets, targeting locations such as Mumbai, Chennai, Noida, and Hyderabad.

The surge in demand for data centres is fueled by the push for data localization, driven by India’s data protection norms and proposed data centre policy. The government’s introduction of “trusted geographies,” limiting cross-border data storage, has intensified the demand for data centres in India, attracting global data majors to invest in the country.

In response to this trend, Adani, in partnership with EdgeConneX, has raised $213 million to advance its data centre initiatives, aiming to become a leading data centre operator by 2030, with a capacity augmentation plan of 1 GW.

India’s data centre market is booming, with Nasscom predicting global investments to reach $200 billion annually by 2025. India is expected to draw around $5 billion annually within two years.

The post Microsoft Acquires 48 Acres of Land in Hyderabad for Building Data Centres appeared first on Analytics India Magazine.

Understanding Python’s Iteration and Membership: A Guide to __contains__ and __iter__ Magic Methods


If you're new to Python, you may have come across the terms "iteration" and "membership" and wondered what they mean. These concepts are fundamental to understanding how Python handles collections of data, such as lists, tuples, and dictionaries. Python employs special dunder methods to enable these functionalities.

But what exactly are dunder methods? Dunder/Magic methods are special methods in Python that start and end with a double underscore, hence the name "dunder." They are used to implement various protocols and can be used to perform a wide range of tasks, such as checking membership, iterating over elements, and more. In this article, we will be focusing on two of the most important dunder methods: __contains__ and __iter__. So, let's get started.
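To see how these protocols surface in everyday syntax, here is a minimal sketch showing that both the "in" operator and iteration simply dispatch to dunder methods on a built-in list:

```python
nums = [1, 2, 3]

# Membership: "x in nums" dispatches to nums.__contains__(x)
print(2 in nums)              # True
print(nums.__contains__(2))   # True

# Iteration: a for loop first calls nums.__iter__() to get an iterator
iterator = iter(nums)
print(next(iterator))         # 1
```

Calling the dunder method directly is never needed in practice; the operator syntax is the idiomatic form, and Python translates it for you.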

Understanding Pythonic Loops with the __iter__ Method

Consider a basic implementation of a file directory using Python classes as follows:

from typing import List


class File:
    def __init__(self, file_path: str) -> None:
        self.file_path = file_path

    def __str__(self) -> str:
        # Print a File as its path rather than the default object repr
        return self.file_path


class Directory:
    def __init__(self, files: List[File]) -> None:
        self._files = files

This is straightforward code: the Directory holds a list of File objects as an instance attribute. Now, if we want to iterate over the directory object, we should be able to use a for loop as follows:

directory = Directory(
    files=[File(f"file_{i}") for i in range(10)]
)

for _file in directory:
    print(_file)

We initialize a directory object with ten sequentially named files and use a for loop to iterate over each item. Simple enough. But whoops! You get an error message: TypeError: 'Directory' object is not iterable.

What went wrong? Well, our Directory class isn't set up to be looped through. In Python, for a class instance to be iterable, its class must implement the __iter__ dunder method. Built-in iterables like lists, dictionaries, and sets all implement this method, which is why we can use them in loops.

So, to make our Directory object iterable, we need to create an iterator. Think of an iterator as a helper that gives us items one by one when we ask for them. For example, when we loop over a list, the iterator object will provide us with the next element on each iteration until we reach the end of the loop. That is simply how an iterator is defined and implemented in Python.

In Python, an iterator must know how to provide the next item in a sequence. It does this using a method called __next__. When there are no more items to give, it raises a special exception called StopIteration to say, "Hey, we're done here." (An infinite iterator simply never raises StopIteration.)
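Before writing our own iterator, we can watch this protocol in action on a built-in list, using the iter() and next() built-ins directly:

```python
iterator = iter([10, 20])

print(next(iterator))  # 10
print(next(iterator))  # 20

try:
    next(iterator)     # The iterator is exhausted, so this raises StopIteration
except StopIteration:
    print("We're done here!")
```

This is exactly the dance a for loop performs behind the scenes, including catching StopIteration to know when to stop.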

Let us create an iterator class for our directory. It will take in the list of files as an argument and implement the next method to give us the next file in the sequence. It keeps track of the current position using an index. The implementation looks as follows:

class FileIterator:
    def __init__(self, files: List[File]) -> None:
        self.files = files
        self._index = 0

    def __next__(self):
        if self._index >= len(self.files):
            raise StopIteration
        value = self.files[self._index]
        self._index += 1
        return value

We initialize an index at 0 and accept the files as an initialization argument. The __next__ method checks whether the index has reached the end of the list. If it has, it raises a StopIteration exception to signal the end of the iteration. Otherwise, it returns the file at the current index and advances the index by one. This process continues until all files have been iterated over.

However, we are not done yet! We still haven't implemented the __iter__ method, which must return an iterator object. Now that we have the FileIterator class, we can finally implement it.

class Directory:
    def __init__(self, files: List[File]) -> None:
        self._files = files

    def __iter__(self):
        return FileIterator(self._files)

The __iter__ method simply creates a FileIterator from the directory's list of files and returns it. That's all it takes! With this implementation, we can now loop over our Directory structure using Python's loops. Let's see it in action:

directory = Directory(
    files=[File(f"file_{i}") for i in range(10)]
)

for _file in directory:
    print(_file, end=", ")

# Output: file_0, file_1, file_2, file_3, file_4, file_5, file_6, file_7, file_8, file_9,

The for loop internally calls the __iter__ method to display this result. Although this works, you might still be confused about the underlying workings of the iterator in Python. To understand it better, let's use a while loop to implement the same mechanism manually.

directory = Directory(
    files=[File(f"file_{i}") for i in range(10)]
)

iterator = iter(directory)
while True:
    try:
        # Get the next item; raises StopIteration when no items are left
        item = next(iterator)
        print(item, end=", ")
    except StopIteration:
        break  # Catch the exception and exit the while loop

# Output: file_0, file_1, file_2, file_3, file_4, file_5, file_6, file_7, file_8, file_9,

We invoke the built-in iter() function on the directory object to acquire a FileIterator. Then we manually call the built-in next() function, which invokes the __next__ dunder method on the FileIterator object. We handle the StopIteration exception to gracefully terminate the while loop once all items have been exhausted. As expected, we get the same output as before!
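As an aside, a separate iterator class is not the only option: a common Python idiom is to write __iter__ as a generator function, which produces an iterator automatically. Here is a self-contained sketch of an equivalent Directory using that approach (with the same File class as above):

```python
from typing import List


class File:
    def __init__(self, file_path: str) -> None:
        self.file_path = file_path


class Directory:
    def __init__(self, files: List[File]) -> None:
        self._files = files

    def __iter__(self):
        # The yield keyword turns __iter__ into a generator function, so
        # Python builds the iterator (and StopIteration handling) for us.
        for _file in self._files:
            yield _file


directory = Directory(files=[File(f"file_{i}") for i in range(3)])
print([f.file_path for f in directory])  # ['file_0', 'file_1', 'file_2']
```

The explicit FileIterator class makes the protocol visible, which is why it is used here for teaching; in production code the generator form is usually preferred for its brevity.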

Testing for Membership with the __contains__ Method

It is a fairly common use case to check whether an item exists in a collection of objects. In our directory example, we will quite often need to check whether a file exists in the directory. Python makes this syntactically simple with the "in" operator.

print(0 in [1, 2, 3, 4, 5])  # False
print(1 in [1, 2, 3, 4, 5])  # True

These checks are mostly used in conditional expressions and evaluations. But what happens if we try this with our directory example?

print("file_1" in directory)   # False
print("file_12" in directory)  # False

Both give us False, which is incorrect! Why? To support membership checks properly, we should implement the __contains__ dunder method. When it is not implemented, Python falls back to the __iter__ method and compares each item with the == operator. In our case, it iterates over each item and checks whether the string "file_1" equals any File object in the list. Since we're comparing a string to custom File objects, none of them match, resulting in a False evaluation.
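To see this fallback concretely, here is a small sketch (with hypothetical Item and Box classes) where "in" silently falls back to iteration plus == and never matches:

```python
class Item:
    def __init__(self, name: str) -> None:
        self.name = name


class Box:
    def __init__(self, items):
        self._items = items

    def __iter__(self):
        return iter(self._items)


# No __contains__ is defined, so "in" iterates and compares with ==.
box = Box([Item("a"), Item("b")])
print("a" in box)  # False: the string "a" never equals an Item object
```

Since Item does not define __eq__ either, == falls back to identity comparison, so the string can never match.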

To fix this, we need to implement the __contains__ dunder method in our Directory class.

class Directory:
    def __init__(self, files: List[File]) -> None:
        self._files = files

    def __iter__(self):
        return FileIterator(self._files)

    def __contains__(self, item):
        for _file in self._files:
            # Check if file_path matches the item being checked
            if item == _file.file_path:
                return True
        return False

Here, we change the membership check to iterate over the files and compare each File object's file_path attribute with the string passed in. Now if we run the same existence checks, we get the correct output!

directory = Directory(
    files=[File(f"file_{i}") for i in range(10)]
)

print("file_1" in directory)   # True
print("file_12" in directory)  # False

Wrapping Up

And that’s it! Using our directory structure example, we built a simple iterator and membership checker to understand the internal workings of Pythonic loops. Design decisions and implementations like these appear fairly often in production-level code, and through this real-world example we covered the integral concepts behind the __iter__ and __contains__ methods. Keep practicing with these techniques to strengthen your understanding and become a more proficient Python programmer!

Kanwal Mehreen Kanwal is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She's also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.


The road to democratized AI with Kwaai

Discussion with Reza Rassool and Cam Geer


For our 5th episode of the AI Think Tank Podcast, I had the pleasure of joining two key figures in the field of data democratization and its intersection with AI: Reza Rassool, founder of Kwaai, and Cam Geer, Co-Founder and COO of Cryptid Technologies. Throughout our conversation, we discussed Kwaai’s bold efforts to make AI accessible and beneficial for everyone, unpacking the complexities and transformative potential of their initiatives.

Reza shared compelling insights from his rich background, particularly his transition from media streaming innovation at Real Networks to retiring to spearhead the creation of Kwaai. He passionately outlined Kwaai’s mission, which centers on crafting a personal AI operating system designed to empower individuals rather than serve corporate interests. This system emphasizes personalization, security, and independence from dominant cloud services to safeguard user privacy. Reza expressed this vision poignantly, stating:

“Our goal is to redefine the interaction between technology and personal data. We envision an AI that is not just a tool used by the masses but an extension of the individual, respecting their privacy and enhancing their autonomy.”

Where things are heading

We explored the multifaceted approach Kwaai takes towards AI development, which includes software development, fundamental AI research, and the formulation of ethical policies. Reza elaborated on the necessity of this diverse approach to ensure that AI development aligns with human-centric values and operates transparently and ethically. This is an Open Source effort with the practical needs of many different types of individuals in mind.

Cam Geer added a vital dimension to our discussion, emphasizing the critical role of data provenance in the integrity and utility of AI systems. His company, Cryptid Technologies, focuses on securing the lineage of data utilized in AI, ensuring its origin and manipulations are transparent and verifiable. This aspect is increasingly crucial as AI permeates more personal and creative domains. Cam succinctly captured the essence of his work with a notable quote:

“At Cryptid, we strive to create an environment where data does not just exist but tells a story of its authenticity. By securing data provenance, we ensure that the foundations of AI are built on trust and transparency, which are essential for its ethical application.”


The potential societal impacts of democratized AI dominated much of our conversation. Both Reza and Cam shared a vision of the future where AI acts as a personal assistant that enhances individual productivity and decision-making without infringing on privacy or autonomy. They discussed the broader implications for industry standards and regulatory frameworks, advocating for a community-driven approach to AI development. This, they argued, would ensure the technology is shaped by diverse contributions, reflecting a wide array of needs and perspectives.

The Open Source and Supporters

Throughout our talk, the commitment of our guests to an open-source collaboration in AI development was evident. They believe that such collaborative efforts are crucial for fostering innovation in AI technology and ensuring that it benefits the whole of society, not just a select few. As we delved into these discussions, the complexity of democratizing AI became apparent—balancing technological advancement with ethical considerations is a formidable challenge, yet one that holds immense potential for societal benefit.

Along with these efforts, Kwaai has garnered support and sponsorship from companies and groups such as ArkusNexus, Chase, Salesforce, Wiley, College of the Canyons, Social Linux Expo, DataConLA, Ai4, and the AI Think Tank Podcast. Our members also include great minds such as Doc Searls, Bruce Schneier, and Larry Namer. With no shortage of talent or variety of perspectives, Kwaai is making daily progress on its initiatives and goals.

In closing, our conversation not only highlighted the pioneering work of Kwaai and Cryptid Technologies but also underscored the transformative power of AI when guided by principles of openness, security, and personal empowerment. For those inspired by our discussion or interested in participating in this transformative endeavor, I encourage engaging with Kwaai’s projects and community forums.

The road to democratizing AI is fraught with challenges but also rich with opportunities for significant societal impact. It requires a multidisciplinary approach, combining technical prowess with ethical foresight and regulatory wisdom, to ensure that AI technologies developed are truly beneficial for all sectors of society. As Reza poignantly noted:

“AI can either be a tool that enhances our lives or a tool that enhances control over our lives. The choice lies in how we develop, deploy, and govern these technologies.”

Our dialogue serves as a reminder of the power of collective effort in shaping the future of technology, a future where AI serves humanity universally and justly. As we continue these critical conversations, the path to a more equitable tech landscape becomes increasingly clear, marked by collaboration, innovation, and a steadfast commitment to the public good.

Subscribe to the AI Think Tank Podcast on YouTube.
Would you like to join the show as a live attendee and interact with guests? Contact Us

OpenAI says it’s building a tool to let content creators ‘opt out’ of AI training

By Kyle Wiggers

OpenAI says it’s developing a tool to let creators better control how their content is used in AI.

Called Media Manager, the tool — once it’s released — will allow creators and content owners to identify their works to OpenAI and specify how they want those works to be included or excluded from AI research and training. The goal is to have the tool in place by 2025, OpenAI says, as the company works with creators, content owners and regulators toward a common standard.

“This will require cutting-edge machine learning research to build a first-ever tool of its kind to help us identify copyrighted text, images, audio and video across multiple sources and reflect creator preferences,” OpenAI writes in a blog post. “Over time, we plan to introduce additional choices and features.”

It’d seem Media Manager, whatever form it ultimately takes, is OpenAI’s response to growing criticism of its approach to developing AI, which relies heavily on scraping public data from the web. Most recently, eight prominent U.S. newspapers, including the Chicago Tribune and Orlando Sentinel, sued OpenAI for copyright infringement relating to the company’s use of generative AI tech.

SIMA: Scaling Up AI Agents Across Virtual Worlds for Diverse Applications

Discover how Google DeepMind's Scalable Instructable Multiworld Agent (SIMA) revolutionizes AI technology, with human-like adaptability

Amidst swift advancements in Artificial Intelligence (AI), Google DeepMind’s Scalable Instructable Multiworld Agent (SIMA) represents a substantial step forward. This innovative AI agent is engineered to perform tasks across many 3D virtual environments, demonstrating adaptability and learning capabilities akin to human cognition.

The emergence of AI agents like SIMA is pivotal for virtual domains. As these environments become more complex and lifelike, the need for intelligent agents that can support authentic user interactions intensifies. The SIMA agent is not just a character in a game; it is designed to pursue objectives, adapt to changing conditions, and exhibit behaviors that heighten the realism and immersiveness of virtual environments.

The Evolution of SIMA

Initially envisioned as a leap beyond conventional AI agents confined to single games, SIMA was designed to be a multifaceted agent capable of traversing and learning within various virtual worlds. Google DeepMind recognized the potential of dynamic video game environments as a rich ground for AI advancement and thus initiated the SIMA project.

The team began with Atari games but then aimed for a more ambitious goal of creating an AI that could handle tasks across different gaming platforms. This shift was a significant step in AI research, aiming to make an agent that could adapt to various virtual worlds.

As SIMA developed, it achieved significant milestones, showing its growing skills and the team's expanding goals. It could follow natural-language commands in games, showing a human-like understanding. Working with game developers, SIMA trained across different games, mastering skills like object manipulation and understanding the game world.

Today, SIMA agents have over 600 abilities, including navigation and object interaction. They can quickly respond to commands, from simple ones like “turn left” to more complex ones like “climb the ladder” or “open the map,” usually within about 10 seconds.

SIMA's progress highlights how AI can enhance virtual experiences and pave the way for real-world applications. Its ongoing refinement indicates continued innovation in AI, changing how we interact with virtual worlds and beyond.

Exploring SIMA's Architecture

SIMA’s architecture revolves around the integration of advanced vision and language models. These models work together to interpret and interact with diverse 3D virtual environments. By fine-tuning pre-trained models to specific game settings, SIMA can understand and execute tasks based on human instructions, demonstrating human-like capabilities.

The SIMA training process involves collaborating with multiple game studios and exposing the agent to various video games and research environments. This diverse exposure allows SIMA to learn from numerous experiences, from basic navigation to complex tasks like resource mining or item crafting in games such as No Man’s Sky and Teardown. By recording human players’ actions and instructions across different games, SIMA generalizes knowledge across tasks and environments, exhibiting remarkable zero-shot capabilities.

Despite challenges like real-time execution delays and network latency, SIMA has persevered and achieved significant milestones. It has mastered grounding language in perception and embodied action, an essential step toward performing complex tasks across multiple simulated worlds. This advancement represents a step toward a general AI that can understand and follow arbitrary language instructions in any 3D environment.

Case Studies of SIMA’s Successful Deployments

SIMA's application extends beyond gaming into real-world scenarios, reflecting its versatility and potential impact. SIMA's deployment within the vast universe of “No Man's Sky” highlights its navigational and task-performing abilities in gaming. This demonstrates potential applications in real-world exploration tasks, such as search and rescue operations or planetary exploration.

Similarly, in the Construction Lab environment, where SIMA agents build sculptures from blocks, its object manipulation skills hint at applications in construction or manufacturing.

SIMA's AI technology holds promise across diverse industries. In healthcare, it could revolutionize simulation training for medical professionals. Educational settings could benefit from interactive learning environments simulating historical events or scientific phenomena, offering students immersive experiences.

As SIMA progresses, ethical considerations remain paramount. Its deployment must prioritize responsible interactions and adaptability without game-specific programming, ensuring it remains beneficial to humanity. While detailed case studies of SIMA's real-world deployment are limited, its foundational work in gaming environments suggests potential impacts across industries.

The Future of SIMA and Virtual World Interactions

Looking ahead, SIMA represents a groundbreaking innovation, signaling a new era in the relationship between AI and virtual worlds.

The evolution of SIMA is poised to usher in a new wave of AI agents characterized by remarkable sophistication. The next generation of SIMA agents is expected to exhibit enhanced autonomy and adaptability, equipped with advanced cognitive abilities that enable them to perform complex tasks without human guidance. Advanced machine learning techniques will likely empower these agents to assimilate knowledge from their interactions, adjust seamlessly to novel environments, and make decisions in real time.

The implications of SIMA for Virtual Reality (VR) and Augmented Reality (AR) technologies are profound. We can envision a future where SIMA agents enrich VR experiences by creating dynamic environments that react to user inputs. In AR, SIMA could provide context-sensitive overlays that augment our interaction with the physical world, effectively diminishing the divide between our physical and digital experiences. This combination of AI with VR and AR promises to deliver immersive experiences that were once a dream.

As the capabilities of AI agents like SIMA advance, ethical considerations must remain at the core of development. It is imperative that the progression of SIMA-like agents align with the principles of fairness, transparency, and accountability. This ethical framework is essential to avoid reinforcing biases and infringing on privacy and to ensure that these agents contribute positively to human welfare, uphold human rights, and strengthen sustainable practices.

The Bottom Line

In conclusion, SIMA is a pivotal advancement in AI technology, opening opportunities to enhance both virtual experiences and real-world applications. Its evolution from gaming origins toward diverse sectors highlights its adaptability and potential impact. As SIMA continues to progress, collaborative research and development, guided by principles of fairness and accountability, will be vital to its responsible deployment, pointing to a future where AI enriches our lives in meaningful and impactful ways.

OpenAI Joins Adobe and Others as C2PA Committee Member

The Coalition for Content Provenance and Authenticity (C2PA) announced that OpenAI has joined the C2PA as a steering committee member.

This marks a significant milestone for the C2PA and will help advance the coalition’s mission to increase transparency around digital media as AI-generated content becomes more prevalent.

OpenAI joins the other steering committee members: Adobe, BBC, Intel, Microsoft, Google, Publicis Groupe, Sony, and Truepic.

OpenAI will collaborate to further develop and promote the adoption of Content Credentials, an implementation of the C2PA’s open technical standard for tamper-evident metadata that can be attached to digital content, showing how and when the content was created or modified.

Today’s announcement builds on OpenAI’s previously shared initiatives to improve transparency around digital provenance.

Earlier this year, OpenAI began attaching Content Credentials to images created and edited by DALL•E 3, the company’s latest image model, in ChatGPT and the OpenAI API.

In addition, the company also announced its plans to attach Content Credentials to video generations from Sora, the company’s text-to-video model, when the model is ready to be deployed to the public.

OpenAI’s membership and implementation of Content Credentials serve as a strong endorsement for the C2PA technical specification and advance the collective mission to help restore trust in the digital ecosystem.

“C2PA is playing an essential role in bringing the industry together to advance shared standards around digital provenance,” said Anna Makanju, OpenAI’s VP of Global Affairs. “We look forward to contributing to this effort and see it as an important part of building trust in the authenticity of what people see and hear online.”

The Coalition for Content Provenance and Authenticity (C2PA) is an open, technical standards body addressing the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of digital content.

C2PA is a Joint Development Foundation project.


This YC-Backed Startup is Helping Enterprises Save Up to 90% on SaaS Expenditures with Generative AI

Ask any enterprise customer and they will tell you how expensive it is to run SaaS applications in the cloud. Gartner projected that in 2023 alone, enterprises saw about a 21% increase in SaaS spending. In some cases, average spend surged by up to 500%.

This is where YC-backed CloudEagle comes into the picture.

The San Francisco-based AI startup offers a SaaS management and procurement platform designed to help organisations optimise their software spending and streamline the management of their SaaS applications.

“A customer who spends between $1 and $2 million on SaaS applications can see 30% to 90% savings using CloudEagle,” said Prasanna Naik, co-founder of CloudEagle and a former Airbnb and Oracle exec, in an exclusive interaction with AIM.

Further, he said that the company has started experimenting with AI and generative methods in multiple areas, as it has access to data from thousands of companies.

“We have a database of around 250,000 SaaS applications, and it’s continuously growing, so the data keeps growing and our engine becomes stronger,” said Naik.

“We know how much a company with 300 employees is paying for Salesforce compared to one with 2500 employees for the same number of licenses,” he said. Naik added that the company has built a recommendation engine that suggests which application enterprises should use based on their use cases, current tech stack, industry, employee base size, and employee growth over the past couple of months or years.

Furthermore, the company utilises generative AI, enabling users to check their subscription expiration for a specific application directly within Slack. Users can type their inquiries into the Slack chatbot, which promptly generates the required information. The company is currently building its own proprietary LLM for this.

It also uses AI in contract management systems. “Our AI is able to extract all the details from contracts, including start, end, and renewal dates,” said Naik, adding that the company also keeps a history of previous contracts.

Streamlining Your SaaS Expenditure

CloudEagle was founded in 2021 by Prasanna Naik and Nidhi Jain. Jain has previously worked in various capacities at notable companies such as Box, ServiceNow, and Goldman Sachs.

Their motivation to start the company stemmed from their observations of the inefficiencies in software procurement processes at their previous workplaces.

CloudEagle’s key solutions include SaaS management and SaaS procurement. The former gives customers a complete view of their SaaS applications.

“With CloudEagle, IT department personnel can quickly spot what specific app employees use,” said Naik. He illustrated that if, say, 48 employees don’t use Salesforce in a company, CloudEagle promptly alerts the IT team to trim their licenses.

Further, he said that CloudEagle helps connect to various data sources, such as SSO and finance systems, and browser plugins make this possible. “Today, CloudEagle has more than 380 integrations,” said Naik, adding that this eliminates the need for manual tracking and spreadsheets.

On the other hand, SaaS procurement automates the process of purchasing new SaaS licenses. “CloudEagle automates manual tasks related to SaaS procurement, including approval workflows for new subscriptions and automatic reminders for renewals. This saves time for the IT and finance teams,” added Naik.

CloudEagle vs the World

BetterCloud, Spendesk, Zylo, Torii, Flexera, Cledara, Productiv, Sailpoint, Vendr, and Tropic are among CloudEagle’s top competitors. Naik dismissed these as basic SaaS procurement companies.

“CloudEagle is not just a SaaS procurement company; it is moving beyond that. The company has introduced a new feature called Automated License Harvesting,” said Naik.

The feature automatically detects when licenses are not actively used. For instance, if a user has not logged into an application for a specified duration, the system identifies this license as a candidate for harvesting.

“It runs weekly or monthly, according to the customer’s needs. We have automated this process, so companies no longer need to hire three more IT engineers or finance professionals continuously,” added Naik.
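The detection logic Naik describes can be sketched in a few lines of Python. This is a hypothetical illustration, not CloudEagle’s actual implementation; the record format and the 30-day default window are assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of automated license harvesting: flag licenses
# whose user hasn't logged in within a configurable inactivity window.
def harvest_candidates(licenses, now, inactive_days=30):
    cutoff = now - timedelta(days=inactive_days)
    return [lic["user"] for lic in licenses if lic["last_login"] < cutoff]

now = datetime(2024, 5, 1)
licenses = [
    {"user": "alice", "last_login": datetime(2024, 4, 28)},
    {"user": "bob", "last_login": datetime(2024, 2, 10)},
]
# bob hasn't logged in for roughly 80 days, so his license is flagged
print(harvest_candidates(licenses, now))  # ['bob']
```

A real system would run this on the weekly or monthly schedule Naik mentions and feed the candidates into an approval workflow rather than revoking licenses outright.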

CloudEagle has added another new feature, called Employee Onboarding. When an employee joins the company, CloudEagle automatically assigns them applications like Hubspot, Salesforce, and Slack.

“Everything related to SaaS has to happen on CloudEagle,” concluded Naik, adding that the company will introduce new features like Access Review, which will be able to determine the type of license a particular employee has for any application.

The post This YC-Backed Startup is Helping Enterprise Save Up to 90% on SaaS Expenditures with Generative AI appeared first on Analytics India Magazine.

Ollama Tutorial: Running LLMs Locally Made Super Simple

Image by Author

Running large language models (LLMs) locally can be super helpful—whether you'd like to play around with LLMs or build more powerful apps using them. But configuring your working environment and getting LLMs to run on your machine is not trivial.

So how do you run LLMs locally without any of the hassle? Enter Ollama, a platform that makes local development with open-source large language models a breeze. With Ollama, everything you need to run an LLM—model weights and all of the config—is packaged into a single Modelfile. Think Docker for LLMs.

In this tutorial, we’ll take a look at how to get started with Ollama to run large language models locally. So let’s get right into the steps!

Step 1: Download Ollama to Get Started

As a first step, you should download Ollama to your machine. Ollama is supported on all major platforms: macOS, Windows, and Linux.

To download Ollama, you can either visit the official GitHub repo and follow the download links from there, or visit the official website and download the installer if you are on a Mac or Windows machine.

I’m on Linux (the Ubuntu distro). So if you’re a Linux user like me, you can run the following command to execute the installer script:

$ curl -fsSL https://ollama.com/install.sh | sh

The installation process typically takes a few minutes. During installation, any NVIDIA/AMD GPUs will be auto-detected; make sure you have the drivers installed. CPU-only mode works fine too, but it may be much slower.

Step 2: Get the Model

Next, you can visit the model library to check the list of all model families currently supported. The default model downloaded is the one with the latest tag. On the page for each model, you can get more info such as the size and quantization used.

You can search through the list of tags to locate the model that you want to run. For each model family, there are typically foundational models of different sizes and instruction-tuned variants. I’m interested in running the Gemma 2B model from the Gemma family of lightweight models from Google DeepMind.

You can run the model using the ollama run command to pull and start interacting with the model directly. However, you can also pull the model onto your machine first and then run it. This is very similar to how you work with Docker images.

For Gemma 2B, running the following pull command downloads the model onto your machine:

$ ollama pull gemma:2b

The download is about 1.7 GB, and the pull should take a minute or two:

ollama-pull

Step 3: Run the Model

Run the model using the ollama run command as shown:

$ ollama run gemma:2b

Doing so will start an Ollama REPL at which you can interact with the Gemma 2B model. Here’s an example:

ollama-response

For a simple question about the Python standard library, the response seems pretty okay, and it includes the most frequently used modules.

Step 4: Customize Model Behavior with System Prompts

You can customize LLMs by setting system prompts for a specific desired behavior like so:

  • Set system prompt for desired behavior.
  • Save the model by giving it a name.
  • Exit the REPL and run the model you just created.

Say you want the model to always explain concepts or answer questions in plain English with as little technical jargon as possible. Here’s how you can go about doing it:

>>> /set system For all questions asked answer in plain English avoiding technical jargon as much as possible
Set system message.
>>> /save ipe
Created new model 'ipe'
>>> /bye

Now run the model you just created:

$ ollama run ipe

Here’s an example:

ipe-response

Step 5: Use Ollama with Python

Running the Ollama command-line client and interacting with LLMs locally at the Ollama REPL is a good start. But often you would want to use LLMs in your applications. You can run Ollama as a server on your machine and run cURL requests.
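For example, with the server running on its default port (11434), you can hit the /api/generate endpoint from Python using nothing but the standard library. This is a minimal sketch; it assumes a local Ollama server is running and gemma:2b has already been pulled:

```python
import json
import urllib.request

def build_payload(prompt, model="gemma:2b"):
    # stream=False asks the server for one JSON object instead of a stream of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, url="http://localhost:11434/api/generate"):
    # POST the JSON payload to the local Ollama server and return the response text
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("What is a qubit?")  # returns the model's answer as a string
```

This is the same request you’d make with cURL; the Python libraries below simply wrap it more conveniently.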

But there are simpler ways. If you’d like to use Python to build LLM apps, here are a couple of ways you can do it:

  • Using the official Ollama Python library
  • Using Ollama with LangChain

Pull the models you need to use before you run the snippets in the following sections.

Using the Ollama Python Library

To use the Ollama Python library, you can install it using pip like so:

$ pip install ollama

There is an official JavaScript library too, which you can use if you prefer developing with JS.

Once you install the Ollama Python library, you can import it in your Python application and work with large language models. Here's the snippet for a simple language generation task:

import ollama

response = ollama.generate(model='gemma:2b', prompt='what is a qubit?')
print(response['response'])

Using LangChain

Another way to use Ollama with Python is via LangChain. If you have existing projects using LangChain, it’s easy to integrate or switch to Ollama.

Make sure you have LangChain installed. If not, install it using pip:

$ pip install langchain

Here's an example:

from langchain_community.llms import Ollama

llm = Ollama(model="llama2")
llm.invoke("tell me about partial functions in python")

Using LLMs like this in Python apps makes it easier to switch between different LLMs depending on the application.

Wrapping Up

With Ollama you can run large language models locally and build LLM-powered apps with just a few lines of Python code. Here we explored how to interact with LLMs at the Ollama REPL as well as from within Python applications.

Next we'll try building an app using Ollama and Python. Until then, if you’re looking to dive deep into LLMs, check out 7 Steps to Mastering Large Language Models (LLMs).

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.

More On This Topic

  • A Simple Guide to Running LlaMA 2 Locally
  • Pydantic Tutorial: Data Validation in Python Made Simple
  • Combining Pandas DataFrames Made Simple
  • Personalized AI Made Simple: Your No-Code Guide to Adapting GPTs
  • Distribute and Run LLMs with llamafile in 5 Simple Steps
  • Run an LLM Locally with LM Studio

Legion’s founder aims to close the gap between what employers and workers need

Kyle Wiggers

While taking a long road trip across the U.S. years ago, Sanish Mondkar realized that there were stark, problematic disconnects between employers and the staff they employ.

To critics of late-stage capitalism, that might sound like an obvious observation. But Mondkar, who has a master’s in computer science from Cornell, says that seeing the issues up close made all the difference.

“Traveling from town to town, I couldn’t help but notice the perpetual ‘for hire’ signs plastering the windows of countless labor-intensive businesses such as retailers and restaurants,” he said. “Simultaneously, I saw employees frequently changing jobs, yet struggling to make a living wage. This disparity between employers’ needs and workers’ realities struck a chord with me.”

Inspired by this experience, as well as stints at Ariba and SAP, where he was EVP and chief product officer, Mondkar set out to build a startup that helps companies manage their workforces — particularly contract and gig workforces. His venture, Legion, today announced it raised $50 million in funding led by Riverwood Capital with participation from Norwest, Stripes, Webb Investment Network and XYZ.

“My objective was to rebuild the enterprise category of workforce management in order to maximize labor efficiency for the businesses and deliver value to the workers simultaneously,” Mondkar said. “I wanted to differentiate the company itself with a focus on intelligent automation of WFM and the employee value proposition.”

Legion is designed to support customers — employers like Cinemark, Dollar General, Five Below and Panda Express — in managing their hourly staff by automating certain decisions, like how much labor to deploy where and when to schedule workers. Taking into account demand forecasting, labor optimization and the preferences of employees, Legion’s platform generates work schedules.

Employees whose companies are on Legion can use its mobile app to request how they want to work and set their preferred hours. Legion’s algorithm then tries to match the preferences of workers with the needs of the business.

Legion

Legion also incorporates performance management tools and a rewards program of sorts.

“We use algorithms trained on a blend of customer data and third-party data, which Legion aggregates from its partners,” Mondkar said. “This integration allows for forecasts for planning and resource allocation.”

In addition to the base scheduling features, Legion — very on trend — is leaning into generative AI with a tool called Copilot (not to be confused with Microsoft Copilot). Copilot answers questions about work informed by an organization’s employee handbook, labor standards and training content. In the coming months, Copilot will gain the ability to summarize work schedules and fulfill requests to add or delete shifts or change staffer assignments.

“In order to attract and retain staff, companies employing hourly labor must emulate gig-like flexibility,” Mondkar said. “Legion provides this with the intelligent automation of scheduling. Managers can match staff to projected demand, closing the gap between the needs of employees and the needs of the business.”

That’s all well and fine, but two concerning things stand out to me about Legion: its privacy policy and earned wage access (EWA) program.

Legion says it stores customer data for seven years by default — a long time by any measure. More concerningly, the data includes personally identifiable information like workers’ first and last names, email and home addresses, ages, photos and work preferences. Big yikes.

Legion says the data is necessary to “facilitate scheduling in compliance with labor regulations,” and that users can request that their data be deleted at any time. But I question the ease of the deletion process — and just how transparent Legion is about its data retention policies to customers.

My other gripe with Legion is InstantPay, Legion’s EWA program, which lets employees access a portion of their earned wages ahead of their scheduled paydays. Legion charges workers $2.99 for instant earned wage transfers, while next-day transfers are free — that might not sound like very much, but it can add up for a low-income worker. Legion pitches this as a benefit for hourly workers that gives them “greater flexibility” and “control” over their finances, as well as a business retention tool. But EWA programs are under scrutiny from policymakers, consumer rights advocates and employers.

Legion’s mobile app.

Some consumer groups argue that EWA programs should be classified as loans under the U.S. Truth in Lending Act, which provides protections such as requiring lenders to give advance notice before increasing certain charges. These groups say EWA programs can force users into overdraft while effectively levying interest through fees.

Legion

In addition, it’s not clear whether EWA programs are a net win for employers. Walmart recently tried to combat attrition by giving hourly staff access to wages early. Instead, it found that employees using EWA tended to quit faster.

Setting aside my niggles with Legion, the company appears to be growing robustly despite competition from companies like Ceridian’s Dayforce, Quinyx, and UKG, with revenue and bookings climbing 55% and 125%, respectively, in the past year. That’s all the more impressive considering that funding for HR tech startups fell to a three-year low last year — $3.3 billion, down from $10.5 billion in 2021 — after a flurry of interest from VCs.

Legion, which makes money by charging subscriptions calculated by the number of hourly workers a company employs, plans to put its recently raised capital toward growing its 200-staffer workforce with a focus on expanding R&D and customer-facing teams and launching go-to-market efforts in Europe.

To date, Legion’s raised $145 million.

“Legion will use our funds to fuel continued innovations in workforce management, including deep investments in R&D,” Mondkar said. “Legion has been relatively insulated from the broader tech slowdown, thanks to our focus on labor-intensive industries. This strategic alignment positions us well to navigate any potential economic headwinds effectively.”

Cisco Unveils AI-Native Cybersecurity Innovations

Cisco announced major advancements in its Security Cloud platform at the RSA Conference 2024, building on the momentum of its Hypershield launch and Splunk acquisition.

The company is reimagining security at AI-scale with rapid innovations across its unified, AI-driven security portfolio.

Advancing the SOC of the Future with Splunk Integration

Just two months after completing its $28 billion acquisition of Splunk, Cisco is accelerating customers’ journey toward the Security Operations Center (SOC) of the future:

  • Integrating Cisco XDR with Splunk Enterprise Security to improve threat detection and response by combining the strengths of each solution.
  • Launching Splunk Asset and Risk Intelligence for proactive risk mitigation through continuous asset discovery and compliance monitoring.
  • Introducing the AI Assistant for Security in Cisco XDR to empower analysts with contextual insights and automated workflows.

“The combination of Cisco and Splunk is the most comprehensive security solution for threat prevention, detection, investigation and response, utilising the cloud, endpoint traffic and Cisco’s unmatched network footprint for unparalleled visibility,” said Jeetu Patel, EVP and GM for Security and Collaboration at Cisco.

Protecting Against Unknown Vulnerabilities with Hypershield

Building on last month’s launch of Hypershield, Cisco introduced new capabilities to detect and block attacks stemming from unknown vulnerabilities within runtime workload environments.

Cisco Hypershield is a radically new, AI-native approach to securing data centres and clouds, enabling security enforcement to be placed everywhere it’s needed.

It protects applications, devices and data across any location and is “the most consequential security innovation in Cisco’s history,” according to executives.

Furthermore, suspected workloads can also be isolated to limit a vulnerability’s blast radius.

Enabling Continuous Identity Security

Cisco also announced enhancements to enable Continuous Identity Security and combat the rise in identity-based attacks:

  • Duo Passport to provide frictionless access by minimising repeated authentication requests.
  • Cisco Identity Intelligence in Duo to assess and respond to identity risk using AI-driven analytics.

“Cisco Duo’s commitment to dynamic response to risk, coupled with an emphasis on seamless user experience, is not just timely, it’s groundbreaking,” said Todd Thiemann, Senior Analyst at Enterprise Strategy Group.

With these innovations powered by AI and its Splunk acquisition, Cisco is accelerating its platform strategy to revolutionise how customers connect and protect every aspect of their organisations in the era of AI.

The company aims to deliver unparalleled visibility, automation and resilience through the industry’s most extensive security and observability portfolio.

The post Cisco Unveils AI-Native Cybersecurity Innovations appeared first on Analytics India Magazine.