Google Bard is stepping up its AI game with these new features

Google Bard using Google Lens update

Google has been adding new features to Bard since its underwhelming launch in order to improve the user experience. In its latest update, Google greatly expanded Bard's language and country availability and added many new features.

As of Thursday, Bard will be available in over 40 languages, including Arabic, Chinese, and Spanish, a major expansion that helps it compete with its biggest rival, ChatGPT.

Also: How to use Google Bard now

This language expansion also applies to a new Bard feature — spoken responses.

Spoken responses will allow users to listen to Bard's responses in over 40 languages as opposed to only being able to read them.

According to the release, this feature is meant to help users understand responses through a different medium, hear the correct pronunciation of a sentence, or listen to a poem or script.

Also: ChatGPT vs Bing vs Google Bard: Which is the best AI chatbot?

To further expand Bard's accessibility, it is now available in more countries and territories including Brazil and across Europe.

Users can now personalize the chatbot's responses in terms of tone and style. The options are simple, long, short, professional, or casual.

This feature resembles Bing Chat's conversation-style options, which let users pick more creative, balanced, or precise responses.

To give users the ability to revisit conversations, Bard now lets users pin and rename conversations in the sidebar.

Exporting chat responses is now also easier on Bard with a Python code-exporting option. Users will be able to export their code to Replit as well as Google Colab. A shareable link export option is also now available to share any conversation.

Also: I tested Google Bard. It was surprising — in a bad way

Lastly, users can now use images in their prompts with the integration of Google Lens into Bard. Originally announced at Google I/O, the feature will allow users to upload an image and ask Bard for more information on it or incorporate it into the prompt.

The feature is currently available in English but will be available in more languages soon, according to Google.


Spline, a no-code design tool for creating 3D assets, raises $15M

Kyle Wiggers

Spline, a no-code design tool for creating 3D assets, today announced that it raised $15 million in a seed round led by Gradient Ventures with participation from First Round Capital, NXTP, Chapter One, Vercel CEO Guillermo Rauch, Y Combinator, Webflow CEO Vlad Magdalin and Backend Capital.

Co-founder and CEO Alejandro Leon says that the proceeds, which bring Spline’s total raised to $16 million, will be put toward R&D and expanding the startup’s 20-person team.

“Back in early 2020, before launching Spline, our talks with investors about investing in 3D were met with reactions of uncertainty,” Leon told TechCrunch in an email interview. “A lot has changed since then, and we also progressed a lot. 3D has now become a default content format, alongside images, audio and video.”

Leon — who says he’s been passionate about 3D since he was a kid, even learning to code because of it — launched Spline in 2020, inspired to make the 3D design process simpler and easier to learn. Spline was accepted into one of Y Combinator’s 2021 batches and launched in beta last March.

Spline lets users create 3D objects, edit materials, add interactivity (including game controls) and animations, and export them — all from a web browser, either from scratch or from pre-made objects. The platform’s collaboration features let users work together to fine-tune and comment on assets, and create real-time physics simulations and interactions between those assets.


Image Credits: Spline

Designs can be exported as image files, GIFs and more, or embedded in webpages using a few strings of code.

“Web technologies are now capable of achieving higher levels of quality. AI is also expanding into 3D, and now we have new upcoming tech that builds on a 3D and spatial foundation,” Leon said.

Leon sees generative AI as the next logical step for Spline’s platform, which competes with incumbent tools like Blender and Cinema 4D. Spline recently added AI style transfer and AI texture tools to its suite, and it’s exploring ways to use prompts for content creation using large language models — following on the heels of companies like OpenAI and Meta.

“We believe that AI can empower more designers to easily get started by reducing the friction and complexity of the 3D creation process,” Leon added. “There are a lot of complexities in the 3D creation process, so this is an ongoing effort and overall challenge, but we think AI will be an important part of making 3D creation more accessible for anyone.”

To date, over 1 million creators have joined Spline — a number that left Gradient Ventures’ Darian Shirazi impressed.

“We’re blown away by the product Alejandro and the Spline team have created,” Shirazi, a general partner at Gradient, said via email. “And the community of creators that have found Spline and fallen in love with its capabilities is equally impressive. The future of computing and communication is driven by 3D, and we believe Spline is at the crossroads of creativity … and AI.”

Google Bard Empowers Users, While ChatGPT Crumbles 

Google today announced a slew of new updates for its AI chatbot Bard. The company said that the platform would be available in over 40 languages, alongside expanding Bard’s access to more places, including Brazil and across Europe.

The new update includes nine Indian languages: Hindi, Kannada, Tamil, Telugu, Bengali, Malayalam, Marathi, Gujarati, and Urdu. ChatGPT can also interact in these nine Indian languages, as well as in Bhojpuri, Punjabi, and more, but it lacks controls to engage with Indian languages seamlessly. Google is clearly ahead in this respect.

With its latest updates, users can now listen to Bard’s responses, alongside changing the tone and style of Bard’s responses to five different options: simple, short, long, professional or casual.

Boosting productivity for its users, the company has added new ways to pin and rename conversations with Bard in their preferred languages, making it easier to share chatbot conversations with their network.

Most interesting of all, Google announced that it is bringing the capabilities of Google Lens into Bard. In other words, users can now use Bard to ask for information about an image or generate captions. Currently, these new features are live in English (US) and will expand to new languages soon.

Read: Can Bard Hang With the Big Boys After Upgrades?

Recently, Google-backed Anthropic launched Claude 2, which is touted as a GPT-4 killer. The new language model boasts an impressive 71.2 percent score on Codex HumanEval, a Python coding test, up from the 56 percent achieved by its previous version, Claude 1.3. In comparison, GPT-4’s score is 4.2 percentage points lower than Claude 2’s.

ChatGPT Crumbles

For the first time since its launch in November, the AI chatbot ChatGPT experienced a decline in website visits, suggesting a potential decrease in consumer interest. This comes against the backdrop of growing competition and better alternatives.

According to SimilarWeb, global desktop and mobile traffic to the ChatGPT website witnessed a decline of 9.7% in June compared to May, while unique visitors to the website dropped by 5.7%. Additionally, the data reveals an 8.5% decrease in the amount of time visitors spent on the website.

While speculations are brewing from all corners as to why ChatGPT is facing these circumstances, some experts argue that the newness of generative tools might be wearing off.

The Washington Post recently suggested that the reduced usage of ChatGPT could be due to vacation time taken by school and college students. Another speculation points to AI hallucinations, where the chatbot generates inaccurate or unsatisfactory responses. The list just goes on.
Read: The Real Reason Behind ChatGPT User Decline

The post Google Bard Empowers Users, While ChatGPT Crumbles appeared first on Analytics India Magazine.

Database Optimization: Exploring Indexes in SQL

Image by Author

While searching for a particular topic in a book, we will first visit the index page (which is present at the start of that book) and find which page number contains our topic of interest. Now, imagine how inconvenient it is to find a particular topic in a book without the index page. For this, we have to search every page in the book, which is very time-consuming and frustrating.

A similar issue occurs when SQL Server retrieves data from a database. To overcome this, SQL Server uses indexing, which speeds up the data retrieval process, and in this article we will cover that part: why indexing is needed and how we can effectively create and delete indexes. The prerequisite for this tutorial is a basic knowledge of SQL commands.

What is Indexing?

An index is a schema object that uses pointers to retrieve data from rows, which reduces the I/O (input/output) time needed to locate the data. Indexing can be applied to one or more columns we want to search. The indexed columns are stored in a separate data structure called a B-tree. One of the main advantages of a B-tree is that it stores the data in sorted order.

If you are wondering why the data can be retrieved faster if it is sorted, then you must read about Linear Search vs Binary Search.
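To make the comparison concrete, here is a short illustrative sketch (not part of the original article): a linear scan touches rows one by one, while binary search, which is essentially what a lookup in a sorted B-tree does, halves the search range at each step.

```python
# Illustrative sketch: linear scan vs. binary search on sorted data.
# A B-tree index lets the database do the equivalent of binary search.

data = list(range(0, 1_000_000, 2))  # already sorted, like a B-tree's keys
target = 999_998                     # worst case for a linear scan

def linear_search(values, x):
    """Full-table-scan analogue: inspect values one by one."""
    steps = 0
    for v in values:
        steps += 1
        if v == x:
            break
    return steps

def binary_search(values, x):
    """Index analogue: halve the search range on each comparison."""
    lo, hi, steps = 0, len(values), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if values[mid] < x:
            lo = mid + 1
        elif values[mid] > x:
            hi = mid
        else:
            break
    return steps

print(linear_search(data, target))  # 500000 comparisons
print(binary_search(data, target))  # roughly log2(500000), i.e. under 20
```

On 500,000 rows the scan needs half a million comparisons in the worst case, while the sorted lookup needs fewer than twenty; this gap is the whole point of indexing.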

Indexing is one of the best-known methods to improve the performance of SQL queries. Indexes are small, fast, and remarkably optimized for relational tables. When we search for a row without an index, SQL performs a full-table scan: it scans every row linearly to find matching conditions, which is very time-consuming. Indexing, on the other hand, keeps the data sorted, as discussed above.

But we should also be careful: indexing creates a separate data structure that requires extra space, and that can become problematic when the database is large. As good practice, indexing is effective only on frequently used columns and should be avoided on rarely used columns. Below are some scenarios in which indexing might be helpful:

  1. The table contains a large number of rows (e.g., more than 10,000).
  2. The required column contains a large number of distinct values.
  3. The required column does not contain a large number of NULL values.
  4. It is helpful if we frequently sort or group data based on particular columns. Indexing quickly retrieves the sorted data rather than performing a full scan.

And indexing can be avoided when,

  1. The table is small.
  2. Or when the values of the column are rarely used.
  3. Or when the values of the columns are frequently changing.

The optimizer may also detect that a full-table scan takes less time than using the index, in which case the index may not be used even if it exists. This can happen when the table is small or the column is frequently updated.
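You can ask the optimizer what it decided. MySQL exposes this through its EXPLAIN statement; the sketch below illustrates the same idea with Python's built-in sqlite3 module (the table and index names mirror this tutorial's examples, but the plan output shown is SQLite's, not MySQL's).

```python
# Sketch using Python's built-in sqlite3; MySQL's EXPLAIN output differs,
# but the idea of inspecting the optimizer's chosen plan is the same.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE employee_info (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)"
)
con.executemany(
    "INSERT INTO employee_info (name, age) VALUES (?, ?)",
    [(f"User{i:05d}", 20 + i % 50) for i in range(1000)],
)
con.execute("CREATE INDEX age_index ON employee_info (age)")

# Ask the planner how it would execute this query.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM employee_info WHERE age = 42"
).fetchall()
for row in plan:
    print(row[-1])  # e.g. SEARCH employee_info USING INDEX age_index (age=?)
```

If the planner had judged a full scan cheaper (say, on a tiny table), the plan would report a SCAN instead of a SEARCH using the index.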

Creating Sample Database

Before starting, you must set up MySQL Workbench on your PC to easily follow the tutorial. You can refer to this YouTube video for setting up your workbench.

After setting up your workbench, we will create some random data from which we can execute our queries.

Creating Table:

-- Create a table to hold the random data
CREATE TABLE employee_info (
    id INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(100),
    age INT,
    email VARCHAR(100)
);

Inserting Data:

-- Insert random data into the table
INSERT INTO employee_info (name, age, email)
SELECT CONCAT('User', LPAD(ROW_NUMBER() OVER (), 5, '0')),
       FLOOR(RAND() * 50) + 20,
       CONCAT('user', LPAD(ROW_NUMBER() OVER (), 5, '0'), '@xyz.com')
FROM information_schema.tables
LIMIT 100;

It will create a table named employee_info with attributes id, name, age, and email.

Show the Data:

SELECT * FROM employee_info;

Output:

Fig. 1 Sample Database | Image by Author

Creating and Deleting an Index

To create an index, we can use the CREATE INDEX command as follows:

Syntax:

CREATE INDEX index_name ON TABLE_NAME (COLUMN_NAME);

In the above query, index_name is the name of the index, TABLE_NAME is the name of the table, and COLUMN_NAME is the name of the column on which we want to apply indexing.

Ex-

CREATE INDEX age_index ON employee_info (age);

We can also create indexes for multiple columns in the same table,

CREATE INDEX index_name ON TABLE_NAME (col1, col2, col3, ...);

Unique Index: We can also create a unique index for a particular column that doesn’t allow duplicate values to be stored in that column. This maintains the integrity of the data and also further improves the performance.

CREATE UNIQUE INDEX index_name ON TABLE_NAME (COLUMN_NAME);

Note: Indexes are created automatically for PRIMARY KEY and UNIQUE columns. We don't have to create them manually.
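As a quick illustration of the integrity guarantee, here is a sketch using Python's built-in sqlite3 rather than MySQL (the index name email_index is ours, invented for the example, not from the tutorial):

```python
# A unique index rejects duplicate values in the indexed column.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee_info (id INTEGER PRIMARY KEY, email TEXT)")
con.execute("CREATE UNIQUE INDEX email_index ON employee_info (email)")

con.execute("INSERT INTO employee_info (email) VALUES ('user00001@xyz.com')")
try:
    # Second insert of the same email violates the unique index.
    con.execute("INSERT INTO employee_info (email) VALUES ('user00001@xyz.com')")
except sqlite3.IntegrityError as err:
    print("duplicate rejected:", err)
```

MySQL raises an analogous duplicate-key error; either way, the database enforces uniqueness so application code doesn't have to.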

Deleting an Index:

We can use the DROP command to delete a particular index from the table.

DROP INDEX index_name ON TABLE_NAME;

We need to specify the index and table names to delete the index.

Show Indexes:

You can also see all the indexes present in your table.

Syntax:

SHOW INDEX FROM TABLE_NAME;

Ex-

SHOW INDEX FROM employee_info;

Output:

Updating an Index

The command below adds a new index to an existing table.

Syntax:

ALTER TABLE TABLE_NAME ADD INDEX index_name (col1, col2, col3, ...);

Note: ALTER is not a standard ANSI SQL command, so its syntax may vary among databases.

For ex-

ALTER TABLE employee_info ADD INDEX name_index (name);

SHOW INDEX FROM employee_info;

Output:


In the above example, we have created a new index in the existing table. But we cannot modify an existing index. For this, we must first drop the old index and then create a new modified one.

For ex-

DROP INDEX name_index ON employee_info;

CREATE INDEX name_index ON employee_info (name, email);

SHOW INDEX FROM employee_info;

Output:

Wrapping it Up

In this article, we have covered a basic understanding of SQL indexing. It is advisable to keep indexes narrow, i.e., limited to a few columns, because too many indexes can negatively impact performance. Indexing speeds up SELECT queries and WHERE clauses but slows down INSERT and UPDATE statements. Therefore, applying indexes only to frequently used columns is good practice.

Until then, keep reading and keep learning.
Aryan Garg is a B.Tech. Electrical Engineering student, currently in the final year of his undergrad. His interest lies in the field of Web Development and Machine Learning. He has pursued this interest and is eager to work more in these directions.

More On This Topic

  • Exploring the SwAV Method
  • Exploring Unsupervised Learning Metrics
  • Exploring Data Distributions with Histograms
  • Exploring Data Cleaning Techniques With Python
  • The Future of AI: Exploring the Next Generation of Generative Models
  • Exploring the Latest Trends in AI/DL: From Metaverse to Quantum Computing

Meet Stable Doodle, the Doodling Cousin of Stable Diffusion 

Stable Diffusion maker Stability AI has unveiled Stable Doodle, a sketch-to-image tool to convert simple drawings into high-quality images.

Developed by AI-based image editing platform Clipdrop and Stability AI, Stable Doodle can be accessed for free on the Clipdrop by Stability AI website, along with the latest Stable Diffusion model, SDXL 0.9. In March, Stability AI acquired Init ML, the creator of Clipdrop.

Stable Doodle is designed to cater to both experienced users and beginners, regardless of their familiarity with AI tools. By harnessing the power of Stable Doodle, anyone with basic drawing skills and internet access can generate high-quality original images within seconds.

Stable Doodle allows for artistic customisation, offering 14 styles to choose from via Stable Diffusion XL. These styles range from realistic photography to cinematic aesthetics to imaginative fantasy art and origami-inspired designs.

This is not the first time that we have had a sketchy cousin of Stable Diffusion. Earlier, an engineer from Replicate, who goes by the GitHub name zeke, developed Scribble Diffusion to convert hand-drawn artwork, along with an accompanying text prompt, into new art.

Decoding the Engineering of Stable Doodle

Stable Doodle combines the image-generation technology of Stability AI’s Stable Diffusion XL with the formidable T2I-Adapter. Developed by Tencent ARC, the T2I-Adapter is a precise condition-control solution that enhances AI image generation.

By introducing trainable parameters to existing large diffusion models, the T2I-Adapter allows for the incorporation of additional input conditions like sketches, segmentation maps, or key poses.

This framework supports multiple models for input guidance simultaneously, granting enhanced control over the generation process. In the context of Stable Doodle, the T2I-Adapter supplements the pre-trained text-to-image model (SDXL), enabling it to comprehend sketch outlines and produce images based on prompts combined with the defined outlines.

The T2I-Adapter network consists of approximately 77 million parameters, delivering additional guidance to pre-trained text-to-image (SDXL) models while maintaining the integrity of the original large text-to-image models.

Mostaque is Always on the Go

At the Bloomberg Technology Summit held in San Francisco, Stability AI’s CEO, Emad Mostaque, acknowledged the concerns surrounding the creation of realistic AI-generated deepfakes during an on-stage interview. Mostaque disclosed that the company had developed “photo-realistic models” but decided against releasing them at that time due to various considerations. He stressed the importance of implementing features such as watermarking to establish standards that enable tracking and appropriate usage of AI-generated content.

Recently, Mostaque gained attention again for his statement during an interview with Peter H. Diamandis for the Moonshots and Mindsets Podcast. He claimed that within the next five years, human programmers would become obsolete and that 41% of code on platforms like GitHub is generated by AI. However, some users have pointed out that there is no data available to support this assertion.

The post Meet Stable Doodle, the Doodling Cousin of Stable Diffusion appeared first on Analytics India Magazine.

Exploring Tree of Thought Prompting: How AI Can Learn to Reason Through Search

Image created by author with Midjourney

Key Points

  • A new paper proposes a "Tree of Thoughts" framework to allow more deliberate problem-solving
  • Represent the reasoning process as search over a tree of possible "thoughts"
  • Use the LLM itself to generate and evaluate these thoughts
  • Employ classic search algorithms to guide the exploration

Introduction

Recently, large language models (LLMs) like GPT-3 have shown impressive abilities in areas like mathematical reasoning and commonsense knowledge. However, their basic text generation method, left-to-right and token-by-token, can limit strategic planning and exploration. A new paper shows that a more deliberate, search-based approach significantly improves LLM problem-solving abilities on challenges like math puzzles and creative writing.

Discussion

A recent paper, Tree of Thoughts: Deliberate Problem Solving with Large Language Models — by Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan — proposes a new framework called "Tree of Thoughts" (ToT) to enhance the problem-solving abilities of large language models (LLMs) like GPT-3 and GPT-4. Currently, LLMs are limited to left-to-right token-level decision making when generating text, which can fall short in tasks requiring more strategic planning and exploration.

ToT represents the problem-solving process as search over a tree, where each node is a "thought" — a coherent chunk of text representing an intermediate reasoning step. This allows the LLM to explore multiple reasoning paths and evaluate the progress of different thoughts towards solving the problem. Specifically, the framework involves:

  1. Decomposing the problem into coherent thought steps based on the task structure.
  2. Using the LLM to generate multiple thought candidates at each step, either independently or sequentially conditioned on previous thoughts.
  3. Getting the LLM to evaluate the promise of different states (partial solutions) through value estimation prompts that assess progress so far.
  4. Using classic search algorithms like breadth-first search or depth-first search over the tree, using the LLM's value estimates to guide exploration and pruning.

This deliberate search allows the LLM to look ahead, backtrack, and make more global choices when needed. The modular framework is model-agnostic and can flexibly adapt its components like thought size, generation, evaluation, and search to the problem structure.
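The four steps above can be sketched as a beam-style breadth-first search over thoughts. This is a minimal illustrative sketch, not the authors' code: generate_thoughts and score_state are hypothetical stubs standing in for the LLM's proposal and value-estimation prompts.

```python
# Hedged sketch of Tree-of-Thoughts-style breadth-first search.
# The two helpers are stand-ins for LLM calls, not the paper's actual code.

def generate_thoughts(state, k=3):
    """Stub for the LLM proposing k candidate next thoughts (step 2)."""
    return [state + [f"step{len(state)}-cand{i}"] for i in range(k)]

def score_state(state):
    """Stub for the LLM's value estimate of a partial solution (step 3)."""
    return -len(state[-1]) if state else 0.0  # toy heuristic

def tree_of_thoughts_bfs(initial_state, depth=3, beam_width=2):
    frontier = [initial_state]
    for _ in range(depth):
        # Steps 1-2: expand every kept state with candidate thoughts.
        candidates = [s for state in frontier for s in generate_thoughts(state)]
        # Steps 3-4: evaluate and keep only the most promising states (pruning).
        candidates.sort(key=score_state, reverse=True)
        frontier = candidates[:beam_width]
    return frontier[0]  # best reasoning path found

path = tree_of_thoughts_bfs([])
print(path)  # a list of 3 thought strings, one per search depth
```

A real implementation would replace the stubs with prompts to the LLM and could swap the loop for depth-first search with backtracking, which is exactly the modularity the framework emphasizes.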

The authors demonstrate ToT on three novel tasks — Game of 24, Creative Writing, and Mini Crosswords. In all cases, ToT significantly boosts the problem-solving performances of GPT-4 over standard prompting baselines. For example, in Game of 24 the success rate increased from 4% with chain-of-thought prompting to 74% with ToT.

Overall, ToT offers a way to integrate symbolic planning and search methods from classical AI with modern LLMs. The interpretability of its language-based thoughts and deliberation also provides opportunities for better human alignment. The authors propose it as an exciting new direction to develop more general problem-solving capabilities in LLMs.

Research Q&A

How does the Tree of Thoughts approach compare to other methods that incorporate symbolic planning or search with neural models, such as NeuroLogic decoding or the LLM+P framework?

The ToT framework differs in that it uses the LLM itself to provide heuristic guidance during search, rather than relying on a separate classical planner (LLM+P) or hard-coded heuristics (NeuroLogic). The language-based thought representation is also more flexible than symbolic planning languages. However, ToT does not yet achieve the level of tight integration and two-way communication between the LLM and planner components that LLM+P demonstrates.

Could the Tree of Thoughts approach be applied to natural language tasks like conversational dialogue or story generation, rather than just constrained reasoning tasks?

While the current paper focuses on reasoning tasks, the general framework of representing possible continuations as thoughts that can be deliberated over seems applicable to less constrained generation problems. For dialogue, thoughts could be candidate utterances to say next, while for stories they could be plot points or character actions. The key challenges would be defining coherent thought steps and developing effective evaluation prompts.

What is innovative about this research?

The key innovation is framing language model inference as search over a tree of thoughts rather than just left-to-right token generation. This allows more deliberate planning, exploration of alternatives, and global lookahead/backtracking. Representing thoughts as coherent semantic units is also innovative compared to previous search methods.

What are the broader implications of this research?

This research could significantly enhance the problem-solving and reasoning capabilities of LLMs, allowing their use in more complex real-world applications like coding, data analysis, robotics, etc. It also makes model decisions more interpretable. The integration of classical search methods with neural models is an exciting direction.

What are some potential issues or oversights with this research as presented, if any?

The tasks explored are still relatively simple. It remains to be seen if the approach scales to more open-ended problems. The search process likely incurs higher compute costs than standard sampling. The heuristics for pruning suboptimal branches are currently imperfect.

What are the logical next research steps from this research?

Important next steps are exploring ToT on more complex planning and decision making tasks, integrating it with external knowledge retrieval, and studying whether variants can be learned more sample-efficiently via meta-learning or reinforcement learning rather than relying solely on a pre-trained LLM. Analyzing the interplay between thought size, search budget, and performance is also an open question.

Takeaways

  • The Tree of Thoughts paradigm demonstrates how classical search techniques can be integrated with modern neural network models.
  • Allowing LLMs to explore alternate reasoning paths makes their decision-making more interpretable.
  • This research direction could enhance LLMs' applicability to complex real-world planning and analysis tasks.
  • Key next steps are extending the approach to less constrained problems, improving the search efficiency, and studying how such skills can be learned.
  • Overall, the deliberate and semantic reasoning of Tree of Thoughts offers an exciting new capability for artificial agents.

Matthew Mayo (@mattmayo13) is a Data Scientist and the Editor-in-Chief of KDnuggets, the seminal online Data Science and Machine Learning resource. His interests lie in natural language processing, algorithm design and optimization, unsupervised learning, neural networks, and automated approaches to machine learning. Matthew holds a Master's degree in computer science and a graduate diploma in data mining. He can be reached at editor1 at kdnuggets[dot]com.

More On This Topic

  • Unraveling the Power of Chain-of-Thought Prompting in Large Language Models
  • Simplifying Decision Tree Interpretability with Python & Scikit-learn
  • Attend the Data Intelligence Summit to Learn from Data Thought Leaders
  • Hyperparameter Tuning Using Grid Search and Random Search in Python
  • Building a Visual Search Engine — Part 2: The Search Engine
  • Exploring the SwAV Method

China unveils provisional rules for generative AI, including a licensing regime

China unveils provisional rules for generative AI, including a licensing regime Rita Liao 8 hours

As the use cases of generative AI see explosive adoption, China has taken a leading role in defining how the rapidly changing technology should be used, including through a licensing regime for service providers.

On Thursday, China’s top cyberspace regulator unveiled a set of provisional rules to govern generative AI services, including API providers, that serve China-based users.

The question then is if China’s quick response to rein in generative AI and its stringent rules will stifle innovation. The policymakers are well aware of the concern, stressing in the document that the rules aim to “balance development and security.”

First and foremost, the rules require generative AI providers to adhere to core socialist values, which prohibit everything from pornography and terrorism to racism and content that threatens China’s national security.

Algorithms that can influence public opinions, the rules say, must be registered with the relevant authority. Generative AI service providers should also obtain an administrative license in accordance with the law, although the document doesn’t specify who is required to do that.

When it comes to user protection, the rules stipulate that algorithms must not be discriminatory based on factors such as ethnicity, gender, age, occupation, or health, and should not be used for anti-competitive behavior. Service providers are encouraged to create an anti-addiction system for underage users, similar to those used in video gaming.

Service providers are responsible for identifying and stopping the generative process for illegal content, and subsequently correcting the algorithms and reporting the incident to the relevant authority. That means prompts into an image generator or chatbot could potentially lead to legal trouble for individuals.

Moreover, regulators have the right to know the specifics of a generative AI model, including its training data, size, type, tagging rules and algorithms.

Lastly, AI development in China has been a top-down effort. The document calls for the creation of a public data training platform and the sharing of computing power. Concrete rules have already been proposed in Beijing for a state-backed, centralized platform that allocates public cloud resources based on customer needs.

As with other critical industries, China is calling for “self-reliant innovation” in AI algorithms, frameworks, chips, software platforms, and other infrastructure, while still encouraging “equal and mutually beneficial” international cooperation.


Elon’s xAI is here, But y? 

When OpenAI floated a $42-a-month price for ChatGPT, it did not go as planned. But Musk seems to have cracked the code: he launched xAI on July 12, 2023. Add 7 + 12 + 23 and you get 42, the answer to the universe.

Philosophy aside, if we trace the timeline of Musk’s decisions, they have all hinted at his desire to own an AI company. Elon Musk, who purchased Twitter last year, has aspired to build what he calls “X, the everything app.” In April he renamed Twitter Inc. to X Corp. That same month, Musk decided to discontinue free API access to Twitter, a move that came as he recognized that developers were scraping data from the platform to train their own large language models (LLMs).

This strategic move by Musk came in the wake of his expressed desire to develop his own Chatbot, named ‘TruthGPT.’ The anticipation surrounding xAI and its upcoming revelations highlights Musk’s continued dedication to pushing the boundaries of artificial intelligence and his ambition to delve into the depths of understanding reality.

xAI’s website states that its mission is to “understand reality,” and it proudly reveals that it consists of a talented team of engineers hailing from renowned U.S. tech giants such as Google, OpenAI, and Microsoft.

Announcing formation of @xAI to understand reality

— Elon Musk (@elonmusk) July 12, 2023

The website of xAI says “We are a separate company from X Corp, but will work closely with X (Twitter), Tesla, and other companies to make progress towards our mission.” The website highlights the suitability of Twitter’s conversation data for training large language models, such as the one powering ChatGPT. xAI will be advised by Dan Hendrycks who currently serves as the director of the Center for AI Safety.

Meanwhile, Tesla’s expertise in designing specialized AI chips and building robust computing clusters for AI applications could potentially enhance xAI’s cloud computing capabilities. In addition to that, Financial Times reported earlier this year that Musk bought 10,000 GPUs from Nvidia for use at one of the company’s two remaining data centers. Additionally, as Tesla ventures into the development of a humanoid robot, there is potential for collaboration and mutual benefit between xAI and Tesla’s project in the future.

Elon Musk had been actively reaching out to AI researchers since February of this year with the goal of establishing a new research lab. The lab’s objective was to create an alternative to ChatGPT, the popular language model. As part of this initiative, Musk successfully recruited Igor Babuschkin, a senior staff research engineer who had recently departed from DeepMind. Not surprisingly, Babuschkin is now a member of the xAI team. Earlier this year, they had discussions about forming a team dedicated to AI research and product development, which aligns with the endeavours of xAI.

Is Musk late?

The xAI name and its vision to ‘understand reality’ create an air of mystery. It seems that Musk is trying to mask the fact that he is behind his competitors by building an unusual hype around xAI.

Musk even started a thread on Twitter asking followers, “What are the most fundamental unanswered questions?” Is this a marketing gimmick, or is Musk seriously looking for answers? For now, it seems like a ploy to undermine his rivals ChatGPT, Bard and Claude 2, as he is far behind in the race.

What are the most fundamental unanswered questions?

— xAI (@xai) July 12, 2023

Time is slipping out of Musk’s hands as Zuckerberg has also played his card by launching Threads. Zuckerberg knows there is something powerful behind the idea of training a chatbot on a social network’s data. The goal of building an alternative to Twitter has possibly now turned into building an alternative to OpenAI.

Surprisingly, Musk admitted as much. In a Twitter Spaces discussion on Wednesday, he said, “xAI is really just kind of starting out here, it will be a while before it’s relevant on the scale of OpenAI-Microsoft AI or Google DeepMind AI. Those are really the two big gorillas in AI right now by far.”

In March 2023, Elon Musk urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that AI tools present “profound risks to society and humanity.” Now it makes sense why he did that: it was a strategic move to keep his competitors mindful of the potential consequences and to put a check on the field’s progress so that he could catch up with them.

Summing up

When Elon Musk bought Twitter in October last year, many criticised the move and wrote Musk off from the tech ecosystem. Nobody was sure about his motive for spending $44 billion on the microblogging platform. Nine months later, things are falling into place. The announcement of xAI hints at the ultimate goal of the Twitter acquisition.

The post Elon’s xAI is here, But y? appeared first on Analytics India Magazine.

Mistakes That Newbie Data Scientists Should Avoid

Photo by Andrea Piacquadio

We all know how demanding the Data Science field is at the moment, with more and more people entering it from all sorts of backgrounds: some with Computer Science degrees, some with no tech degree at all.

This makes it harder for candidates with little technology background to enter the field without making common mistakes. Below is a list of those common mistakes, so you know what to avoid in your job search journey.

Don’t Underestimate Formal Education

If you search through Data Science job listings, most of them require formal education. Although there are many boot camps and courses out there that complement your resume, many recruiters are looking for candidates with some form of technical degree and/or master’s degree.

The bright side is that more universities are offering data science programs and online courses to get you to the level of knowledge required to comfortably apply for Data Science roles. There is the possibility of going self-taught, however, that requires a lot more independent effort and determination. It’s a harder route, but it can happen.

If you would like to check out some free university resources, have a look at this: Free University Data Science Resources

Focusing on the Theory and Not on the Projects

It is typical for newbies in a new industry to focus heavily on theory; they want to have a great understanding just in case someone asks them a question. However, try not to dig too deep into it, and instead start to focus on projects that showcase your skills and their practical application.

These will test your level of theory and give you a better understanding of where and where not to apply it. Learning the theory whilst applying it will improve your likelihood of succeeding in the field and mastering both.

There are so many free datasets out there where you can play around and test your knowledge. You’re not limited at all; you just need to take the jump.

If you would like to know more about some potential projects you can work on, have a look at this: Top Data Science Projects to Build Your Skills

Trying to Fly to the Top of the Ladder

Many people enter the Data Science world with hopes of working with self-driving cars or medicine. This requires a lot of deep learning knowledge which doesn’t come to you overnight; it takes time. Years even. You will need to have experience working with simple datasets, building machine learning algorithms, and more.

It’s all a process that can’t be rushed; you can’t just automatically enter your field of interest, you need to work towards it.

Accepting that you will have to work as a junior for a year or two, and then on machine learning projects for the next five years, is a good reality check on the path to your end goal.

Resume

Resumes are always difficult: you want to sell yourself, but sometimes that can leave your resume looking too messy. Ladders’ 2018 Eye-Tracking study revealed that recruiters spend on average 7.4 seconds scanning each resume.

You can imagine how many people are applying for Data Science roles, and how overwhelming it can be for recruiters to come across resumes that are crammed with information. Rather than doing this, paint a clear picture for the recruiter, highlighting the important points with bullet points and a good structure.

This automatically increases your chances of moving on to the next step.

Preparing Yourself for the Interview

Many Data Science graduates are constantly applying for job after job, and when someone finally gives them a call back, they’ve spent so much time and energy applying that they haven’t actually prepared for the interview stage. The easy part was applying; the hardest part is winning the recruiter over.

Each technology company runs its recruitment process differently, but they typically follow the same pattern: an initial call, followed by coding assessments, which you may be asked to complete remotely or in the office.

This is where your skills are really going to be tested, and you want to ensure that you are prepared. You will be tested on your hard skills as well as your soft skills, so try not to neglect one for the other.

If you’re looking for more information to help you with this, have a read of this:

  • Data Science Interview Guide – Part 1: The Structure
  • Data Science Interview Guide – Part 2: Interview Resources

Search for Jobs Effectively

Don’t just apply based on a job title; use your skills to guide your search. There are going to be many openings for Data Scientists, but you may not have the skills they require, so make sure you read the description and requirements to see if you are a good match.

Searching using the skills you do have will narrow your search and save you a lot of time and energy applying to thousands of jobs that may never reply. You can search by job responsibilities, such as Predictive Modeling, or by skills, such as SQL.

Understand the Sector You are Entering

Data Scientists are in high demand in nearly every industry at the moment, from finance to fashion. When applying for jobs, it is imperative that you understand the sector. You don’t want to start a career as a Data Scientist at a bank with no knowledge of how banks work or their terminology.

If you do that, you are throwing yourself into the deep end, and it may be very hard to get out of it. You will end up hating your job and your choice of career, so ensure that you enter the sector you wish with a sufficient amount of knowledge.

Conclusion

These are the basics that will help you build an effective strategy for entering the world of Data Science. They are common mistakes that can be easily avoided. If you want to know more about industries that are hiring, have a read of this: Top Industries and Employers Hiring Data Scientists in 2022

Nisha Arya is a Data Scientist and Freelance Technical Writer. She is particularly interested in providing Data Science career advice and tutorials, along with theory-based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence can benefit the longevity of human life. A keen learner, she seeks to broaden her tech knowledge and writing skills, whilst helping guide others.


Stability AI releases Stable Doodle, a sketch-to-image tool

By Kyle Wiggers

Stability AI, the startup behind the image-generating model Stable Diffusion, is launching a new service that turns sketches into images.

The sketch-to-image service, Stable Doodle, leverages the latest Stable Diffusion model to analyze the outline of a sketch and generate a “visually pleasing” artistic rendition of it. It’s available starting today through ClipDrop, a platform Stability acquired in March through its purchase of Init ML, an AI startup founded by ex-Googlers.

“Stable Doodle is geared toward both professionals and novices, regardless of their familiarity with AI tools,” Stability AI writes in a blog post shared with TechCrunch via email. “With Stable Doodle, anyone with basic drawing skills and online access can generate high-quality original images in seconds.”

There are plenty of sketch-to-image AI tools out there, including open source projects and ad-supported apps. But Stable Doodle is unique in that it allows for more “precise” control over the image generation, Stability AI contends.

Under the hood, powering Stable Doodle is a Stable Diffusion model — Stable Diffusion XL — paired with a “conditional control solution” developed by one of Tencent’s R&D divisions, the Applied Research Center (ARC). Called T2I-Adapter, the control solution both allows Stable Diffusion XL to accept sketches as input and guides the model to enable better fine-tuning of the output artwork.
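The adapter pattern described above can be illustrated with a toy sketch in Python. This is a simplified illustration of the general idea of adapter-based conditioning, not Stability AI’s or Tencent’s actual code; every function name here is hypothetical:

```python
# Toy illustration of adapter-style conditioning: an adapter turns a sketch
# into guidance features that steer each denoising step toward the outlines.

def adapter(sketch):
    """Hypothetical adapter: derive guidance features from sketch outlines."""
    return [pixel * 0.5 for pixel in sketch]  # scale outline strength

def denoise_step(hidden, guidance):
    """One denoising step, nudged toward the sketch's outlines."""
    return [h + g for h, g in zip(hidden, guidance)]

def generate(sketch, steps=4):
    guidance = adapter(sketch)    # computed once from the sketch
    hidden = [0.0] * len(sketch)  # start from "noise" (zeros, for the toy)
    for _ in range(steps):
        hidden = denoise_step(hidden, guidance)
    return hidden

image = generate([1.0, 0.0, 1.0])  # output echoes the sketch's outline
```

The key design point mirrored here is that the adapter is a small, separate module: the base generative model is left untouched, and the sketch only influences generation through the added guidance features at each step.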

Stable Doodle

Image Credits: Stability AI

“T2I-Adapter enable[s] Stable Doodle to understand the outlines of sketches and generate images based on prompts combined with the outlines defined by the model,” Stability AI explains in the blog post.

This writer didn’t have the opportunity to test Stable Doodle prior to its release. But the cherry-picked images Stability AI sent me looked quite good, at least in comparison to the doodles that inspired them.

In addition to a sketch, Stable Doodle accepts a prompt to guide the image generation process, such as “A comfy chair, ‘isometric’ style” or “Cat with a jeans jacket, ‘digital art’ style.” There’s a limit to the customization, though — at launch, Stable Doodle only supports 14 styles of art.

Stability AI envisions Stable Doodle serving as a tool for designers, illustrators and other professionals to “free up valuable time” and “maximize efficiency” in their work. At the same time, the company cautions that the quality of output images is dependent on the detail of the initial drawing and the descriptiveness of the prompt, as well as the complexity of the scene being depicted.

“Ideas drawn as sketches can be immediately implemented into works to create designs for clients, material for presentation decks and websites or even create logos,” the company proposes. “Moving forward, Stable Doodle will enable users to import a sketch. Further, we will include use cases for specific verticals, including real estate applications, for example.”

Stable Doodle

Image Credits: Stability AI

With tools like Stable Doodle, Stability AI is chasing after new sources of revenue following a lull in its commercial endeavors. (Stable Doodle is free, but subject to limits.) In April, Semafor reported that Stability AI was burning through cash, leading to an executive hunt to help ramp up sales.

Last month, Stability AI raised $25 million through a convertible note (i.e. debt that converts to equity), bringing its total raised to over $125 million. But it hasn’t closed new funding at a higher valuation. The startup was last valued at $1 billion; reportedly, Stability was seeking to quadruple that within the next few months.