This AI-powered robot vacuum is one of my favorites and just hit an all-time low price

Eufy Clean X9 Pro CleanerBot

What's the deal?

The Eufy X9 Pro is seeing a limited-time 39% discount at Amazon, bringing its price down from $900 to $550, but you can get it for $449 when you enter code EUFYX9PRO at checkout.

ZDNET's key takeaways

  • The Eufy Clean X9 Pro CleanerBot lists for $900 but is currently discounted to $550, or $449 with a promo code.
  • It's a convenient robot vacuum/mop combo device with impressive cleaning abilities powered by AI.
  • However, the dust bin is quite small and needs to be emptied manually after each cleaning.

The Eufy Clean X9 Pro CleanerBot, a new 2-in-1 robot vacuum, promises a deep-cleaning, hands-free mopping experience coupled with 5,500Pa of suction power. It also uses some impressive AI-powered navigation features to maneuver throughout your house.

Also: I tested my favorite two-in-one robot vacuum's new model, and it's better in almost every way

Initially, I was less than enthusiastic about trying out yet another robot mop vacuum (I'd tested a similar one recently), but once I watched the Eufy X9 Pro work its way across my home floors, my mind was changed.


The CleanerBot truly lives up to the name, outperforming my old Roborock and the Yeedi MopStation Pro in both vacuum and mop functions. The suction power, 5,500Pa at maximum, is outstanding. And the main brush is bristle-less, made instead of silicone wedges that are just as effective at cleaning floors.

In my limited experience (I've only tested this model for about a week), the primary silicone brush makes the X9 Pro less likely to get tangled, as the wedges scoop debris up rather than sweep it.

The mopping function on the Eufy X9 Pro CleanerBot is one of the two features that impressed me the most. The X9 Pro has two rotating mop pads, which I love in a robot vac/mop combo, that apply 2.2 lbs of downward pressure to break down tough stains, a particularly useful feature in my home full of children and pets.

Review: Roborock S8 Pro Ultra: This 2-in-1 vacuum can do just about everything

The other outstanding feature, and probably my favorite, is the use of AI for navigation, obstacle avoidance, and mapping. The CleanerBot has time-of-flight sensors and an AI camera system, called AI See, that helps detect and avoid objects so the vacuum doesn't suck up your kids' socks or stuffed animals.

It also uses iPath Laser Navigation to create maps of your home, which separates the rooms by color in the Eufy Clean app and even shows you the obstacles that the robot has found in each room. When you review the map after cleaning, you'll find things like power cords, shoes, and trash cans marked on the map.

Eufy isn't the first to use this technology for obstacle avoidance and mapping, but it is a great feature. I hate having to pick up every last bit of paper my kids dropped before I can start cleaning — only to have the robot vacuum get stuck anyway on a power cord somewhere.

Also: This robot vacuum has a brilliant self-cleaning feature I didn't know I needed

The Eufy Clean app lets you customize settings for charging, cleaning intensity, voice, and more. And it also enables you to choose from the rooms that the robot automatically created on the map so you can send it to clean just that area, like a muddy entryway. You can choose to clean zones as small as 1.6 ft by 1.6 ft on the map in case of spills.

The Eufy Clean X9 Pro CleanerBot easily adjusts to uneven surfaces and can cross barriers up to 2 cm high.

Beyond the AI See camera system, the CleanerBot has a sensor to detect floor types in case you're running the X9 Pro in vacuum-and-mop mode and it reaches a carpet or a rug. Once the robot detects one, it raises the mop pads to keep them off the carpet and only vacuums on the soft surface.

Also: Best robot vacuums you can buy right now

Here's another thing I was glad to see: The X9 returns dutifully to its station to wash the mop pads rather than wait until they're overdue for a cleaning. I don't want to see my robot mop dragging dry, dirty mopping pads minutes after it should've returned for a refresh, but I haven't found this to be a problem with the X9.

ZDNET's buying advice

The Eufy Clean X9 Pro CleanerBot lists for $900 and is the perfect option for someone looking for a robot vacuum and mop combination for a home with a lot of hard floors, whether that's tile or hardwood, with some carpet or rugs mixed in.

It doesn't have a self-emptying dustbin, and the dustbin itself has to be emptied after each cleaning as it's pretty tiny. Still, the mopping feature and the suction power are impressive, especially as the mop can pick up stains and dirt that my Yeedi MopStation Pro left behind.

Featured reviews

Enhancing data lineage and metadata management in ELT pipelines


ELT pipelines facilitate the seamless movement of data from source systems to target destinations, enabling transformation and analysis along the way. However, as data traverses through these pipelines, maintaining visibility into its lineage and managing metadata becomes paramount for ensuring data quality, compliance, and governance.

Understanding data lineage in ELT

Data lineage refers to tracking data as it moves through various processing stages. In an ELT pipeline, data lineage encompasses the journey of data from its origins in source systems through the extraction, loading, and transformation phases, ultimately culminating in its consumption by end users or downstream applications.

Extract:

At the outset of the ELT process, data is extracted from sources such as databases, files, APIs, and streaming platforms. Each extraction point represents a critical juncture in the data lineage, capturing the source of the data and the conditions under which it was retrieved.

Load:

Following extraction, data is loaded into a centralized repository or data lake, where it awaits transformation. The loading phase introduces additional metadata about the destination schema, data formats, and storage configurations, further enriching the data lineage.

Transform:

During the transformation phase, data undergoes a series of manipulations to conform to the desired structure, quality, and semantics. Transformations may include cleansing, enrichment, aggregation, and normalization, each leaving its imprint on the data lineage trail.

Consumption:

Upon completion of transformations, the transformed data becomes available for consumption by analysts, data scientists, and business users. The data’s final destination marks the culmination of its journey, with metadata capturing details of its usage, access patterns, and downstream dependencies.
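
The four phases above can be sketched as an ordered lineage trail, one record per hop. The snippet below is an illustrative Python sketch, not any particular lineage tool's API; the dataset and system names (`orders_db.orders`, `warehouse.fct_orders`, and so on) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    """One hop in a dataset's journey through the pipeline."""
    dataset: str
    phase: str       # "extract", "load", "transform", or "consume"
    source: str      # where the data came from at this hop
    target: str      # where it landed
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Build a trail for one dataset as it moves through the four phases.
trail = []
for phase, src, dst in [
    ("extract", "orders_db.orders", "staging/orders.parquet"),
    ("load", "staging/orders.parquet", "lake.raw_orders"),
    ("transform", "lake.raw_orders", "warehouse.fct_orders"),
    ("consume", "warehouse.fct_orders", "dashboard.daily_revenue"),
]:
    trail.append(LineageEvent("orders", phase, src, dst))

print([e.phase for e in trail])
```

Real deployments typically emit similar events in a standard format such as OpenLineage, but the core idea is the same: an ordered, timestamped record for every hop the data takes.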

Importance of metadata management

Metadata serves as the lifeblood of ELT pipelines, providing contextual information about the underlying data assets and their characteristics. Effective metadata management encompasses the creation, capture, storage, and governance of metadata throughout the data lifecycle.

Schema metadata:

Metadata about the structure and schema of data entities plays a pivotal role in ELT pipelines. Schema metadata includes field definitions, data types, constraints, and relationships, aiding in data discovery, integration, and lineage tracing.
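
As a rough illustration of what schema metadata might look like in a catalog, here is a minimal Python sketch; the table name, field names, and constraint labels are hypothetical, not a specific catalog's format.

```python
# Schema metadata for one entity, as a plain dictionary a catalog could store.
orders_schema = {
    "entity": "warehouse.fct_orders",
    "fields": [
        {"name": "order_id", "type": "BIGINT",
         "constraints": ["primary_key", "not_null"]},
        {"name": "customer_id", "type": "BIGINT",
         "constraints": ["not_null"],
         "references": "warehouse.dim_customers.customer_id"},
        {"name": "order_total", "type": "DECIMAL(12,2)",
         "constraints": []},
    ],
}

def field_names(schema: dict) -> list:
    """Support data discovery by listing the fields an entity exposes."""
    return [f["name"] for f in schema["fields"]]

print(field_names(orders_schema))
```

Note how the `references` entry on `customer_id` is what makes relationship-aware lineage tracing possible: it links this entity's schema metadata to another table's.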

Transformation metadata:

Metadata documenting the transformations applied to data offers insights into the logic, rules, and algorithms governing data manipulation. Transformation metadata facilitates reproducibility, auditability, and troubleshooting of ELT processes.
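
One way transformation metadata can support reproducibility and auditing is to fingerprint the transformation rule itself, so a later run can detect whether the logic changed. A minimal sketch, with a hypothetical rule and column names:

```python
import hashlib

def describe_transformation(name, rule, inputs, outputs):
    """Record what a transformation did, with a short fingerprint of its
    rule text so reruns can be checked for logic drift."""
    return {
        "name": name,
        "rule": rule,
        "inputs": inputs,
        "outputs": outputs,
        "rule_fingerprint": hashlib.sha256(rule.encode()).hexdigest()[:12],
    }

step = describe_transformation(
    "normalize_country",
    "UPPER(TRIM(country_code))",
    inputs=["lake.raw_orders.country_code"],
    outputs=["warehouse.fct_orders.country_code"],
)
print(step["name"], step["rule_fingerprint"])
```

If an audit later finds a different fingerprint for the same step name, the rule was changed between runs, which is exactly the kind of question transformation metadata exists to answer.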

Operational metadata:

Operational metadata captures runtime metrics, execution logs, and performance indicators associated with ELT pipelines. This metadata enables pipeline operation monitoring, optimization, and governance, ensuring SLA compliance and resource efficiency.
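
Operational metadata of this kind can be captured by wrapping each pipeline step and recording its status and duration. A minimal Python sketch; the step name and workload are hypothetical.

```python
import time

def timed_run(step_name, fn, *args):
    """Run a pipeline step and capture runtime metrics as operational
    metadata: step name, success/failure status, and wall-clock duration."""
    start = time.perf_counter()
    result, status = None, "success"
    try:
        result = fn(*args)
    except Exception as exc:
        status = f"failed: {exc}"
    metrics = {
        "step": step_name,
        "status": status,
        "duration_s": round(time.perf_counter() - start, 4),
    }
    return result, metrics

# Example: a stand-in "load" step producing 1,000 rows.
rows, metrics = timed_run("load_orders", lambda: list(range(1000)))
print(metrics["step"], metrics["status"], len(rows))
```

In a real orchestrator these metrics would be shipped to a monitoring store, where they feed the SLA and resource-efficiency checks described above.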

Leveraging data lineage and metadata for governance and compliance

Organizations must uphold stringent governance and compliance standards across their ELT pipelines in an era marked by heightened regulatory scrutiny and data privacy concerns. Data lineage and metadata management serve as foundational pillars for achieving these objectives.

Regulatory compliance assurance:

Data lineage and metadata management are pivotal in complying with regulatory mandates, including GDPR, CCPA, HIPAA, SOX, and more. By capturing detailed lineage information and metadata annotations at each stage of the ELT process, organizations can establish a clear audit trail that traces the origin, usage, and transformations applied to sensitive data elements.

  • GDPR Compliance: The GDPR mandates stringent data protection measures and requires organizations to implement mechanisms for tracking the movement and usage of personal data. ELT pipelines augmented with robust data lineage capabilities enable organizations to identify and map personal data across disparate systems, monitor data access patterns, and facilitate timely response to data subject requests (DSRs) such as data access, rectification, and erasure.
  • CCPA Compliance: The CCPA grants consumers the rights to access, delete, and port their personal information. By leveraging data lineage and metadata, organizations subject to the CCPA can fulfill their obligations by providing transparency into how consumer data is collected, shared, and processed, thus enhancing trust and accountability.
  • HIPAA Compliance: Healthcare organizations subject to the Health Insurance Portability and Accountability Act (HIPAA) must safeguard protected health information (PHI) and adhere to stringent data privacy and security standards. Data lineage and metadata management enable healthcare entities to track the flow of PHI across ELT pipelines, enforce access controls, and demonstrate compliance with HIPAA’s requirements for data integrity, confidentiality, and auditability.

Risk mitigation and data governance:

In addition to regulatory compliance, data lineage and metadata are indispensable tools for risk management, data governance, and decision-making processes within organizations. By providing visibility into data flows and transformations, ELT pipelines empower stakeholders to identify, assess, and mitigate risks associated with data lineage ambiguity, lineage breaks, and data quality issues.

  • Risk Identification and Assessment: Data lineage analysis facilitates the identification of critical data assets, their lineage dependencies, and associated risks. By analyzing lineage metadata, organizations can pinpoint potential vulnerabilities such as data silos, redundant processes, and unauthorized data access, enabling proactive risk mitigation strategies.
  • Impact Analysis and Change Management: Changes to data structures, schemas, or business rules in ELT pipelines can have far-reaching implications for downstream applications and analytical outputs. Data lineage enables organizations to conduct impact analysis, assess the ripple effects of proposed changes, and implement robust change management processes to minimize disruptions and ensure continuity of operations.
  • Data Quality Assurance: Metadata-driven data quality controls and lineage monitoring mechanisms enable organizations to uphold data integrity, accuracy, and consistency throughout the ELT lifecycle. By leveraging lineage metadata to track data transformations, anomalies, and discrepancies, organizations can implement proactive quality assurance measures such as data profiling, validation rules, and anomaly detection algorithms, enhancing the reliability and trustworthiness of analytical insights.
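
The validation-rule idea in the last bullet can be sketched in a few lines: each rule is a named predicate, and rows failing any rule are reported for follow-up. The rules and rows below are hypothetical examples, not a particular data-quality framework's API.

```python
def validate_rows(rows, rules):
    """Apply named validation rules to each row; return the rows that
    fail at least one rule, along with which rules they failed."""
    failures = []
    for row in rows:
        failed = [name for name, check in rules.items() if not check(row)]
        if failed:
            failures.append({"row": row, "failed_rules": failed})
    return failures

rules = {
    "total_non_negative": lambda r: r["order_total"] >= 0,
    "order_id_present": lambda r: r.get("order_id") is not None,
}
rows = [
    {"order_id": 1, "order_total": 19.99},
    {"order_id": None, "order_total": -5.00},
]
print(validate_rows(rows, rules))
```

Hooking a check like this into the transformation phase, and recording its results as metadata, is what turns lineage from a passive map into an active quality control.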

Transparency, accountability, and value realization:

Ultimately, effective use of data lineage and metadata within ELT pipelines fosters a culture of transparency, accountability, and value realization. By democratizing access to lineage information and metadata annotations, organizations empower stakeholders across business, IT, and compliance functions to make informed decisions, drive innovation, and extract actionable insights from their data assets.

  • Transparency and accountability: Data lineage transparency enables stakeholders to understand the provenance of data assets, fostering trust and accountability across the organization. By providing visibility into data flows, transformations, and usage, ELT pipelines equipped with robust lineage capabilities enhance transparency, facilitate collaboration, and support regulatory compliance efforts.
  • Value realization and data monetization: Data lineage and metadata management lay the foundation for unlocking new revenue streams and driving value from data assets. By enriching data assets with metadata annotations that capture business context, semantics, and usage patterns, organizations can identify opportunities for data monetization, product innovation, and customer engagement, maximizing the return on their data investments.

Conclusion

In data engineering, ELT pipelines are the linchpin of modern data architectures, facilitating the movement, transformation, and analysis of vast datasets. However, the efficacy of ELT pipelines hinges on robust data lineage and metadata management practices. By embracing a holistic approach to lineage tracing and metadata governance, organizations can unlock new dimensions of data transparency, accountability, and value realization in their ELT initiatives.

OpenAI is training GPT-4’s successor. Here are 3 big upgrades to expect from GPT-5


Even though OpenAI's most recently launched model, GPT-4o, significantly raised the ante on large language models (LLMs), the company is already working on its next flagship model, GPT-5.

Also: How to use ChatGPT Plus: From GPT-4o to interactive tables

Leading up to the spring event that featured GPT-4o's announcement, many people hoped the company would launch the highly anticipated GPT-5. To curtail the speculation, CEO Sam Altman even posted on X, "not gpt-5, not a search engine."

Now, just two weeks later, in a blog post unveiling a new Safety and Security Committee formed by the OpenAI board to recommend safety and security decisions, the company confirmed that it is training its next flagship model, most likely referring to GPT-4's successor, GPT-5.

"OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI [artificial general intelligence]," said the company in a blog post.

Although it may be months if not longer before GPT-5 is available for customers — LLMs can take a long time to be trained — here are some expectations of what OpenAI's next-gen model will be able to do, ranked from least exciting to most exciting.

Better accuracy

Following past trends, we can expect GPT-5 to become more accurate in its responses — because it will be trained on more data. Generative AI models like ChatGPT work by using their arsenal of training data to fuel the answers they provide. Therefore, the more data a model is trained on, the better the model's ability to generate coherent content, leading to better performance.

Also: How to use ChatGPT to make charts and tables with Advanced Data Analysis

With each model released thus far, the scale has increased. For example, GPT-3.5 reportedly has 175 billion parameters, while GPT-4 is reported to have around one trillion. We will likely see an even bigger jump with the release of GPT-5.

Increased multimodality

When predicting GPT-5's capabilities, we can look at the differences between every major flagship model since GPT-3.5, including GPT-4 and GPT-4o. With each jump, the model became more intelligent and boasted upgrades in price, speed, context length, and modality.

GPT-3.5 can only input and output text. With GPT-4 Turbo, users can provide text and image inputs and get text outputs. With GPT-4o, users can input a combination of text, audio, image, and video and receive any combination of text, audio, and image outputs.

Also: What does GPT stand for? Understanding GPT-3.5, GPT-4, GPT-4o, and more

Following this trend, the next step for GPT-5 would be the ability to output video. In February, OpenAI unveiled its text-to-video model Sora, which may be incorporated into GPT-5 to output video.

Ability to act autonomously (AGI)

There is no denying chatbots are impressive AI tools capable of helping people with many tasks, including generating code, Excel formulas, essays, resumes, apps, charts and tables, and more. However, we have been seeing a growing desire for AI that knows what you want done and can do it with minimal instruction — artificial general intelligence, or AGI.

With AGI, users would ask the agent to accomplish an end goal, and it would be able to produce the result by reasoning what needs to be done, planning how to do it, and carrying the task out. For example, in an ideal scenario where GPT-5 had AGI, users would be able to request a task such as "Order a burger from McDonald's for me," and the AI would be able to complete a series of tasks that include opening the McDonald's site, and inputting your order, address, and payment method. All you'd have to worry about is eating the burger.

Also: What is artificial general intelligence really about? Conquering the last leg of the AI arms race

The startup Rabbit is trying to accomplish the same goal with its R1, a gadget that can use agents to create a frictionless experience with real-world tasks such as booking an Uber or ordering food. The device has sold out multiple times despite not being able to carry out the more advanced tasks mentioned above.

As the next frontier of AI, AGI can completely upgrade the type of assistance we get from AI and change how we think of assistants altogether. Instead of relying on AI assistants to tell us, say, how the weather is, they will be able to help accomplish tasks for us from start to finish, which — if you ask me — is something to look forward to.

Artificial Intelligence

A Look at Kwaai’s Personal AI OS

Interview with Kwaai’s Toby Morning and Karsten Wade


In the latest episode of the AI Think Tank podcast, I had the privilege of hosting Toby Morning and Karsten Wade from Kwaai. These two trailblazers are at the forefront of a nonprofit initiative dedicated to democratizing AI through the development of a Personal AI Operating System (pAIOS). Our discussion spanned various topics, from the origins of Kwaai to the technical intricacies of pAIOS, providing a comprehensive insight into their groundbreaking work. This article delves into the key points and takeaways from our enlightening conversation.

Setting the Stage: An Introduction to Kwaai

Kwaai, a nonprofit organization driven by volunteers and backed by notable sponsors, is making significant strides in the AI landscape. The organization’s mission is to bring AI to the masses through an open-source platform, ensuring that everyone has access to the benefits of AI. Both Toby and Karsten play pivotal roles in this initiative. Toby’s extensive background in community building and open-source advocacy, combined with Karsten’s experience in technology and community architecture, make them ideal leaders for this ambitious project.

The Vision of pAIOS

Karsten Wade eloquently likened the current technological advancements to the Gutenberg Press, emphasizing the historical significance of their work. He joined Kwaai in December, drawn by the innovative ideas and the nonprofit’s commitment to sustainability and community-driven efforts. Karsten highlighted that the development of pAIOS is not just about technology but also about creating an organization that is independent of corporate control and dedicated to the community’s needs.

“We’re at one of those big sea-change points in human history,” Karsten remarked, underlining the transformative potential of pAIOS.

Origins and Growth of Kwaai

Toby shared his journey with Kwaai, which began even before it became a nonprofit. His long-standing relationship with Reza, Kwaai’s founder, played a significant role. Toby saw the potential in creating events that bring developers and community members together, thereby building a supportive ecosystem for pAIOS.

“I wanted to help bridge the gap for non-technical founders, enabling them to create product requirement documents or MVPs with the help of AI,” Toby explained, reflecting on his mentoring experience at TechStars and how it aligned with Kwaai’s mission.

The Community-Driven Approach

Kwaai’s approach is deeply rooted in community engagement. The organization’s structure is designed to ensure that it remains beholden to the people rather than corporations. This philosophy is reflected in their open-source model and the inclusive nature of their projects. Toby and Karsten emphasized the importance of building a robust community around pAIOS, which includes developers, mentors, sponsors, and users from diverse backgrounds.

The Role of Hackathons

One of the central strategies for community engagement and development at Kwaai is the organization of hackathons. These events serve as platforms for collaboration, innovation, and growth. They also help attract sponsors, raise funds, and build awareness about pAIOS.

“From the very beginning, we saw hackathons as a way to bring people together and raise attention,” Karsten said. “The after-party call was literally the hackathon,” Toby added, illustrating how organic and community-driven these events are.

Building a Supportive Ecosystem

Hackathons are not just about coding; they are about building a community. These events provide a space for developers to collaborate, share ideas, and work on projects that contribute to the larger goal of democratizing AI. The hackathons also play a crucial role in attracting sponsors and raising funds, which are essential for the sustainability of the nonprofit.

The list of sponsors supporting Kwaai is impressive and diverse, including names like Arcus Nexus, Chase, Salesforce, Wiley, and many others. This broad base of support underscores the widespread belief in Kwaai’s mission and the potential impact of pAIOS.

The Importance of Sponsorship

Sponsorships are vital for the sustainability and growth of Kwaai. They provide the necessary resources to organize events, develop technology, and engage with the community. Toby and Karsten emphasized that while financial support is important, the primary metric of success for Kwaai is community engagement.

“The number one metric that we have from the board is community engagement and not sponsor dollars,” Toby explained. “Don’t get me wrong, I’ll take a dollar or I’ll take a million. But our mandate as an organization is to be for the people, and that means community first.”


Notable Sponsors and Their Contributions

Kwaai has garnered support from a diverse array of sponsors. These include major corporations, educational institutions, and community organizations. Some notable sponsors mentioned during the podcast include:

  • Arcus Nexus: A platinum sponsor providing significant financial support.
  • Chase and Salesforce: Both companies have contributed to the development and sustainability of pAIOS.
  • Wiley: An educational publisher supporting Kwaai’s mission.
  • SoCal Linux Expo and AI4: These organizations are helping to promote the hackathons and provide resources for participants.
  • AI Think Tank Podcast: Outreach.

pAI Palooza: A Nationwide Tour

One of the most exciting initiatives discussed was pAI Palooza, a series of hackathons across various cities starting June 21st in Los Angeles. This initiative aims to bring together local AI startups, developers, and community members to showcase and build upon the pAIOS platform.

The Goals of pAI Palooza

pAI Palooza is designed to foster innovation and community engagement. By organizing hackathons in different cities, Kwaai hopes to reach a wider audience and build a diverse community of developers and users. The events will also serve as a platform for local AI startups to showcase their work and connect with potential mentors and sponsors.

Technical Insights into pAIOS

Our discussion also delved into the technical aspects of pAIOS. Toby and Karsten provided insights into the architecture and design of the system, emphasizing the importance of privacy and security.

Privacy and Security

Privacy is a major concern in the development of pAIOS. The system is designed to ensure that users have full control over their data. Kwaai is implementing a community-driven network for data inference, which allows users to process their data locally without relying on external cloud services.

“Using the Solid protocol for privacy and creating a community-driven network for data inference is game-changing,” Toby noted.

Open-Source Architecture

pAIOS is built on an open-source architecture, which ensures transparency and flexibility. Users can choose their preferred AI models and integrate various components seamlessly. This modular approach makes pAIOS a versatile platform that can adapt to different needs and preferences.

The Importance of Community-Driven Development

The development of pAIOS is a collaborative effort involving contributions from developers worldwide. Karsten highlighted the importance of community-driven development in ensuring that the system remains open, flexible, and responsive to the needs of its users.

Accessibility and Inclusivity

A significant part of our conversation focused on ensuring accessibility and inclusivity in AI development. Toby stressed the need for involving diverse groups, including people with disabilities, in the development process to create a truly inclusive AI ecosystem.

“Accessibility is a core part of pAIOS. It can significantly help people with disabilities, making technology more inclusive and beneficial for everyone,” Toby said.

Engaging Underrepresented Groups

Kwaai is actively working to engage underrepresented groups in AI development. This includes organizing events and hackathons that are accessible to everyone, regardless of their technical background. Toby and Karsten emphasized the importance of creating an inclusive environment where everyone feels welcome and valued.

Building a Strong Community

Kwaai’s community-driven approach ensures that the development of pAIOS is guided by the needs and preferences of its users. This collaborative effort helps create a system that is responsive, flexible, and inclusive. Toby and Karsten highlighted the importance of building strong relationships within the community to foster innovation and growth.

The Benefits of Community Engagement

Community engagement provides numerous benefits for both developers and users. It creates opportunities for learning, collaboration, and networking. It also helps build a supportive environment where everyone can contribute to the development of pAIOS and benefit from its advancements.

The Future of pAIOS

The future of pAIOS looks promising, with numerous initiatives and projects in the pipeline. Toby and Karsten shared their vision for the continued growth and development of the system, emphasizing the importance of community involvement and support.

Upcoming Projects and Initiatives

Kwaai has several exciting projects and initiatives planned for the future. These include expanding the pAI Palooza tour, developing new features for pAIOS, and engaging with more communities and organizations. Toby and Karsten encouraged everyone to get involved and contribute to the ongoing development of pAIOS.

The Importance of Continued Support

The continued support of the community is crucial for the success of pAIOS. Toby and Karsten emphasized that everyone has a role to play in the development of the system, whether as a developer, mentor, sponsor, or user. They encouraged everyone to join the Kwaai community and contribute in any way they can.

Conclusion

Toby and Karsten provided a comprehensive insight into the groundbreaking work being done by Kwaai. The development of pAIOS represents a significant step towards democratizing AI and making its benefits accessible to everyone. The community-driven approach, emphasis on privacy and security, and commitment to inclusivity and accessibility make pAIOS a truly transformative project.

Kwaai and the development of pAIOS are paving the way for a future where AI is accessible, inclusive, and beneficial for all. The journey ahead is filled with opportunities for learning, collaboration, and making a positive impact on society. For more details and to get involved, visit pAI Palooza and join the Kwaai community. Let’s work together to shape the future of AI, one hackathon at a time.

Resources:

pAIOS GitHub

Kwaai AI Lab GitHub

Gravity AI

Hackathon Registration Calendar

New England Research Cloud

Summer Of Code

Urban Tech

Join us as we continue to explore the cutting-edge of AI and data science with leading experts in the field.

Subscribe to the AI Think Tank Podcast on YouTube. Would you like to join the show as a live attendee and interact with guests? Contact Us

Celoxis: Project Management Software Is Changing Due to Complexity and New Ways of Working

Project management systems have become cornerstone tools for organisations in APAC to navigate the fast pace of business changes. Whether it’s a cloud migration or a system implementation, IT teams in particular desperately need PM systems to get things done.

APAC-headquartered Celoxis is one project portfolio management software firm seeing this change. Celoxis Head of Customer Success Ratnakar Gore said project management is dealing with challenges that include more complex projects, remote working teams, and a shift to agile working.

He said project management systems like Celoxis are evolving alongside these developments through measures like integrating team collaboration tools and social media, supporting operations with resource planning visualisation, and improving interoperability with other systems.


What challenges are the project management discipline facing?

The project management discipline has changed since Celoxis launched in the late 90s. Gore said that initially, the firm was established due to a perceived need among organisations for more flexibility and scalability to address project management and collaboration challenges across workforces. Since then, the firm has grown out of India and into global markets.

VIDEO: Watch our introduction to Celoxis’ project management capabilities

APAC has also seen the birth of other project management software vendors like Australia’s Atlassian, and a new influx of cloud vendors like monday.com and ClickUp are making their presence known in the market.

Gore said project management is facing a number of challenges and opportunities.

The increasing complexity of portfolios and projects

Projects have become larger and more interconnected across geographies. “This is something all organisations are dealing with. The interconnected global markets, advancements in technology, and diverse project requirements make large-scale initiatives challenging.”

The rise of remote and hybrid work and collaboration

Global events like the COVID-19 pandemic have caused an increasing shift towards remote and hybrid work, including in APAC. Gore said, “This has added challenges for team collaboration, interpersonal communication, and team dynamics” to facilitate remote project management.

Efficient allocation and management of resources

Recent tighter economic conditions have meant a focus on efficient allocation and management of resources, including human and non-human assets. “Amid stringent budget constraints this is a critical challenge. Project success is a function of optimising resource utilisation,” Gore said.

Moving from waterfall to agile project management

There is a growing preference for iterative agile project management methodologies over traditional waterfall-style project management, with Gore saying a number of businesses see this as a way to achieve “enhanced adaptability and responsiveness.”

However, the transition, which has been embraced in markets in APAC like Australia, requires a “thought-process shift.” He added, “A number of organisations struggle with the aspects related to change management associated with adopting agile implementation frameworks.”

Aligning project portfolios with business strategy

There is a general move to ensure project portfolios contribute to overall business objectives, which Gore said is “always challenging.” He said, “Maintaining a clear link between projects and strategic goals require[s] effective communication and alignment across all levels of the organization.”

Implementing project management tools and processes

These changes have required organisations to implement new project management tools and processes, which has its own challenges. “Traditionally, there is resistance to change, and most companies work towards managing this resistance to ensure a smooth transition,” Gore said.

How are project management software vendors responding?

Project management software providers are continuing to evolve products to account for new challenges. Gore said that as the business environment shifts and technologies like AI take off, IT managers will witness a number of changes in technologies in the APAC market.

The expansion of systems to include managerial roles

In the past, software project management was handled by an IT project manager or an overall project manager with operational and project responsibilities. Gore said competition is driving software to incorporate more diverse roles and functions from an organisation’s workforce.

SEE: Top 10 best project management software and tools in 2024

Organisations are now seeing “departments or teams with no traditional and active engagement in project management software solutions” invited into the user ecosystem, he said, as organisations seek to align stakeholders with the aims and goals of the organisation.

The prioritisation of resource management visibility

Project management tools offer resource management features with centralised scheduling. “This is enabling a team view of the pending work and available capacity. Data-driven decision-making is able to enhance project portfolios and resource allocations,” Gore said.

The incorporation and utilisation of AI and automation

The AI boom is impacting the project management software market. Gore said there is a growing trend towards incorporating AI and automation features “to streamline repetitive tasks, optimise resource allocation, and provide intelligent project performance insights.”

Enhancement of the user interface and user experience

Systems that are difficult and unattractive to use are less effective in projects, and system skills sometimes need to be acquired fast. Gore said intuitive UI and UX are “gaining a lot of emphasis, with a greater focus on catering to users with varying levels of technical expertise.”

The foundations of data security and compliance

Project management vendors are prioritising data security and compliance. “As more sensitive data gets stored and managed within the various project management tools, there is a greater focus on data security and compliance with regulations such as GDPR,” Gore said.

DOWNLOAD: This GDPR Security Pack from TechRepublic Premium

Project management tools are getting more social

Social media platforms have pervaded the lives of workforces in the consumer market, and this is also influencing project management tools. Gore said that, though this is a developing area, “social media platforms will play a part in project management software in some form or shape.”

The centralisation of team collaboration for projects

The requirement for teams to collaborate across locations, particularly with more remote work across APAC, is requiring more in-the-moment collaboration and team communication within project platforms. This is driving demand for the likes of real-time messaging functionality.

SEE: TechRepublic’s review of Celoxis features, pricing and alternatives

The move to cloud project management software tools

There has been a notable rise in cloud-native SaaS project management solutions. Gore said that “current market trends indicate the rise of more project management software providers making their systems available on the cloud to support collaboration and scheduling.”

The requirement for more integration and compatibility

Silos are the enemy of good project management. Systems are trying to expand integration capabilities, including into new business domains. “There is a focus on seamless integration with other business applications, for a connected business operations scenario,” Gore said.

How is Celoxis responding to the changing market?

Celoxis offers an “all-in-one” portfolio and project management software solution. It includes advanced scheduling, Gantt charts, risk management, time tracking, Kanban project planning, issue tracking, a reporting and dashboard engine, and an innovative client portal.

Celoxis offers a fully featured portfolio and project management option. Image: Celoxis

Gore said market trends and opportunities are seeing the firm focus on ensuring integration and compatibility with other platforms and devices, building in real-time messaging to facilitate collaboration, improving user experiences and enhancing data security and compliance.

Visit Celoxis

NinjaTech AI Teams Up With AWS to Launch the Next Generation of AI Agents and Copilots

NinjaTech AI, a San Francisco-based startup that offers conversational artificial intelligence for business, has announced the public beta launch of its new AI agent service Ninja. Powered by Amazon Web Services (AWS) purpose-built machine learning (ML) chips, Ninja goes beyond traditional AI assistants and co-pilots by bringing autonomous agents to market.

Ninja has the ability to plan and execute real-world tasks asynchronously, such as scheduling meetings on the user’s behalf, helping out with coding tasks, or conducting in-depth research. The AWS chips Trainium and Inferentia2, along with Amazon SageMaker, a fully managed cloud-based ML service, enable Ninja to create, train, and scale custom AI agents capable of autonomously managing complex tasks.

Leveraging AWS’s cloud capabilities, Ninja users can assign new tasks without waiting for ongoing tasks to be completed. The service pings the user when it completes an operation or when a question needs input. Users can also view the progress of each task in a sidebar.

"Working with AWS's Annapurna Labs has been a genuine game-changer for NinjaTech AI. The power and flexibility of Trainium & Inferentia2 chips for our reinforcement-learning AI agents far exceeded our expectations: They integrate easily and can elastically scale to thousands of nodes via Amazon SageMaker," stated Babak Pahlavan, founder and CEO of NinjaTech AI.

Pahlavan further shared that these new AWS-designed chips can save up to 80% in total costs, and are 60% more energy efficient compared to similar GPUs on the market. The chips also offer native support for the larger 70B variants of the latest open-source models like Meta’s Llama 3.

AI agents rely on LLMs that are tailored to specific tasks through extensive training and fine-tuning, such as reinforcement learning. However, the scarcity and inelasticity of GPUs required for training LLMs have emerged as a key challenge for the successful deployment of AI models. AWS offers a solution by providing a combination of custom chip technology and scalable cloud infrastructure.


The four key conversational AI agents offered by Ninja are Real-Time Web Search, Ninja Coder, Basic Scheduler, and Ninja Advisor. Additionally, Ninja offers limited access to third-party LLMs, enabling users to see a side-by-side comparison of results from leading LLMs from companies such as Google, Anthropic, and OpenAI. The free version of Ninja offers only a limited number of daily tasks; users will have to subscribe to paid plans to access more.

NinjaTech AI is now closer to its goal of providing a service for busy professionals to get the most out of their AI. The state-of-the-art asynchronous infrastructure allows Ninja users to manage a nearly infinite number of tasks simultaneously. This is like harnessing the power of multiple AI models without having to manually work with each model individually.

The launch of Ninja has the potential to unlock the next generation of productivity tools that can transform how we work, learn, and collaborate. It also underscores AWS's growing influence in the AI and ML market. NinjaTech AI is one of many companies that have chosen AWS Inferentia and AWS Trainium chips over NVIDIA GPUs.

Related Items

AI Chatbots: A Hedge Against Inflation?

OpenAI Rival Inflection AI Raises $1.3B to Enhance Its Pi Chatbot

University of Michigan is Developing an AI Coaching Bot For Students

IBM to test Southeast Asian LLM and facilitate localization efforts


IBM has inked an agreement with AI Singapore (AISG) to test the latter's Southeast Asian large language model (LLM) and make it available for developers to build customized artificial intelligence (AI) applications.

Under the partnership, IBM will test the Southeast Asian Languages in One Network (SEA-LION) model using Big Blue's AI technology and data platform, Watsonx, and work with AISG to fine-tune the LLM. The goal is to help organizations choose suitable AI models for their business requirements, IBM and AISG said in a joint statement on Tuesday.

Also: Google joins collaborative efforts to build localized large language models

IBM will also make SEA-LION available in its AI use case library, dubbed Digital Self-Serve Co-Create Experience (DSCE), enabling developers and data scientists to build localized generative AI (GenAI) applications.

An open-source LLM developed by AISG, SEA-LION is designed to be smaller, more flexible, and faster than other LLMs, according to AISG. Its current iteration runs on two base models: a 3-billion-parameter model and a 7-billion-parameter model. The LLM’s training data comprises 981 billion language tokens, which AISG defines as fragments of words created by breaking down text during tokenization. These fragments include 623 billion English tokens, 128 billion Southeast Asian tokens, and 91 billion Chinese tokens.

With SEA-LION, Singapore aims to drive the development of LLMs that better reflect Southeast Asia's societal mix and exhibit stronger contextual understanding of the region's cultures and languages.

The partnership aims to push forward a "custom-made foundation model" for Southeast Asia and made by Southeast Asians, according to Leslie Teo, AISG's senior director of AI products. The two organizations will also look to build use cases, fuel SEA-LION's adoption, and help organizations "scale AI safely and responsibly," Teo said.

The collaboration encompasses efforts to incorporate AI governance into SEA-LION, so businesses can better navigate compliance, risk management, and model lifecycle management, even as government regulations on AI continue to evolve.

"[IBM] believes further progress of GenAI will bring greater performance in smaller language models, with users given the opportunity to personalize models based on their business and industry requirements," Catherine Lian, IBM Asean's general manager and technology leader, said in a statement.

Also: Generative AI may be creating more work than it saves

"No one model is a one-size-fits-all for businesses, and organizations must be empowered with a choice to use their models based on their needs," Lian said. "[The] SEA-LION LLM is a big step forward in creating an open AI system and addressing the Asean language challenges that companies and governments face when working with AI."

In March, AISG also announced a partnership with Google to enhance datasets used to train, fine-tune, and assess AI models in languages specific to Southeast Asia. Called Project Southeast Asian Languages in One Network Data, the initiative aims to "improve cultural context awareness" in LLMs built for the region.

Initially, the project will focus on Indonesian, Thai, Tamil, Filipino, and Burmese — languages for which AISG and Google will develop translocalization and translation models. They will also build tools to help scale translocalization capabilities, share best practices for tuning datasets, and publish pre-training guides for Southeast Asian languages.


Brain science: Mind mechanism of AI safety, interpretability and regulation


The basis of how the human brain works is conceptually the mechanism of the mind—which is the electrical and chemical signals of neurons, in sets, with their interactions and features.

Recently, the Department of Commerce released a Strategic Vision on AI Safety, stating that, “The U.S. AI Safety Institute will focus on three key goals: Advance the science of AI safety; Articulate, demonstrate, and disseminate the practices of AI safety; and Support institutions, communities, and coordination around AI safety.”

In a publication on AI interpretability, Mapping the Mind of a Large Language Model, Anthropic wrote, “We successfully extracted millions of features from the middle layer of Claude 3.0 Sonnet (a member of our current, state-of-the-art model family, currently available on claude.ai), providing a rough conceptual map of its internal states halfway through its computation. This is the first ever detailed look inside a modern, production-grade large language model. For example, amplifying the ‘Golden Gate Bridge’ feature gave Claude an identity crisis even Hitchcock couldn’t have imagined: when asked ‘What is your physical form?’, Claude’s usual kind of answer – ‘I have no physical form, I am an AI model’ – changed to something much odder: ‘I am the Golden Gate Bridge… my physical form is the iconic bridge itself…’. Altering the feature had made Claude effectively obsessed with the bridge, bringing it up in answer to almost any query, even in situations where it wasn’t at all relevant.”

What should be the basis—or neuroscience—of AI safety? The brain or the mind?

Artificial neural networks are digital simulations of biological neural networks. However, even with several neuroimaging techniques (fMRI, EEG, electron microscopy, CT, PET, and others), the brain is yet to be fully understood across numerous mental states.

Although the anatomy of neurons is delineated with correlations to physiology, neurons—conceptually—are not the human mind. It is theorized that the human mind has functions and features. Functions arise from interactions of electrical and chemical signals, in sets. They include memory, feelings, emotions, and modulation of internal senses. Features qualify or grade the functions. Simply, features place what functions do in any instance. They include attention, awareness [or less than attention], self or subjectivity, and intent or free will. Sets of electrical and chemical signals are conceptually obtained in clusters of neurons—across the central and peripheral nervous systems.

Though Anthropic labeled features as “matching patterns of neuron activations, to human-interpretable concepts”, in the human mind, features and functions are not the same.

In the human mind, the Golden Gate Bridge is a memory, which can be qualified by attention, awareness, self or intent. Anthropic noted that “Looking near a ‘Golden Gate Bridge’ feature, we found features for Alcatraz Island, Ghirardelli Square, the Golden State Warriors, California Governor Gavin Newsom, the 1906 earthquake, and the San Francisco-set Alfred Hitchcock film Vertigo. Looking near a feature related to the concept of ‘inner conflict’, we find features related to relationship breakups, conflicting allegiances, and logical inconsistencies, as well as the phrase ‘catch-22’. This shows that the internal organization of concepts in the AI model corresponds, at least somewhat, to our human notions of similarity.”

In the human mind, conceptually, there are thick sets and thin sets. Thick sets [of electrical and chemical signals] collect whatever is similar between information, leaving thin sets with whatever is unique. This means that several interpretations in the human mind are possible by thick sets of signals—doors, windows, chairs, desks and others.

Thick and thin sets are also qualifiers [or features] on the mind. Thick sets broadly explain what is referred to as associative memory, concepts, and categories. There are qualifiers on the mind like sequences, which can be old or new, principal spots, splits, and others.

Anthropic wrote, “The features we found represent a small subset of all the concepts learned by the model during training, and finding a full set of features using our current techniques would be cost-prohibitive (the computation required by our current approach would vastly exceed the compute used to train the model in the first place). Understanding the representations the model uses doesn’t tell us how it uses them; even though we have the features, we still need to find the circuits they are involved in. And we need to show that the safety-relevant features we have begun to find can actually be used to improve safety.”

AI does not have emotions or feelings, but it has memory—or it uses digital memory. All the memory that is available to AI is never brought to attention at once, just like the human memory, showing that the memory gets qualified or graded. Some of the qualifiers of human memory are similar to those used by large language models, though they might have additional variations.

The goal is to seek out what qualifiers work for LLMs, and to explain how they come by their outputs, either positive or not. It is possible to draw from the qualifiers of the human mind, for how order is established and what to seek.

When Anthropic tweaked a feature, it answered, “I am the Golden Gate Bridge… my physical form is the iconic bridge itself…”.

Something like this is possible in the human mind—with neural probes, certain mental conditions and some psychoactive substances. If it were the mind, it would be a problem of distribution [a qualifier], where, rather than discuss the bridge as something else in the memory, it instead personalized it, since there was a cut from that distribution of being [for the awareness of self and things]. So, the memory was not used as something the self knows, it made it what the self is.

Guardrails are already shaping what some AI can output—or not, but even the mechanics of guardrails can be defined by the human mind, like what not to give attention to, what to be aware of, and what to use its sub-intent to evade.

How the electrical and chemical signals of neurons interact is postulated by the action potentials—neurotransmitters theory of consciousness.

The qualifiers of the human memory can then be used to seek out explainable artificial intelligence, for what to look for, not just to find “a full set of features using their current techniques.”

Anthropic hires former OpenAI safety lead to head up new team


Jan Leike, a leading AI researcher who earlier this month resigned from OpenAI before publicly criticizing the company’s approach to AI safety, has joined OpenAI rival Anthropic to lead a new “superalignment” team.

In a post on X, Leike said that his team at Anthropic will focus on various aspects of AI safety and security, specifically “scalable oversight,” “weak-to-strong generalization” and automated alignment research.

I'm excited to join @AnthropicAI to continue the superalignment mission!
My new team will work on scalable oversight, weak-to-strong generalization, and automated alignment research.
If you're interested in joining, my dms are open.

— Jan Leike (@janleike) May 28, 2024

A source familiar with the matter tells TechCrunch that Leike will report directly to Jared Kaplan, Anthropic’s chief science officer, and that Anthropic researchers currently working on scalable oversight — techniques to control large-scale AI’s behavior in predictable and desirable ways — will move to report to Leike as Leike’s team spins up.

✨🪩 Woo! 🪩✨
Jan's led some seminally important work on technical AI safety and I'm thrilled to be working with him! We'll be leading twin teams aimed at different parts of the problem of aligning AI systems at human level and beyond. https://t.co/aqSFTnOEG0

— Sam Bowman (@sleepinyourhat) May 28, 2024

In many ways, Leike’s team sounds similar in mission to OpenAI’s recently dissolved Superalignment team. The Superalignment team, which Leike co-led, had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within four years, but often found itself hamstrung by OpenAI’s leadership.

Anthropic has often attempted to position itself as more safety-focused than OpenAI.

Anthropic’s CEO, Dario Amodei, was once the VP of research at OpenAI, and reportedly split with OpenAI after a disagreement over the company’s direction — namely OpenAI’s growing commercial focus. Amodei brought with him a number of ex-OpenAI employees to launch Anthropic, including OpenAI’s former policy lead Jack Clark.

How to use ChatGPT to make charts and tables with Advanced Data Analysis


Know what floats my boat? Charts and graphs.

Give me a cool chart to dig into and I'm unreasonably happy. I love watching the news on election nights, not for the vote count, but for all the great charts. I switch between channels all evening to see every possible way that each network finds to present numerical data.

Is that weird? I don't think so.

Also: The moment I realized ChatGPT Plus was a game-changer for my business

As it turns out, ChatGPT does a great job making charts and tables. And given that this ubiquitous generative AI chatbot can synthesize a ton of information into something chart-worthy, what ChatGPT gives up in pretty presentation it more than makes up for in informational value.

It should come as no surprise to anybody that AI chatbots' feature sets are changing constantly. As of the time of this update (end of May 2024), OpenAI has just come out with a Mac application and has released its GPT-4o LLM, which is available to both free and paying customers. The GPT-4o version included with the paid Plus plan is supposed to have interactive chart features and allow longer sessions with the engine.

But, not so much. GPT-4o hasn't rolled out to all free accounts yet, including mine. And while the paid ChatGPT Plus plan does provide the interactive charts feature in Chrome and Safari, it doesn't in the Mac app.

Also: ChatGPT vs. ChatGPT Plus: Is a paid subscription still worth it?

This article was last updated when the Advanced Data Analysis features (which included charts) were only available to Plus customers. Even though some of those features are supposed to be available to free customers, my free account doesn't have them yet, so I'm going to present the rest of this article as if the charting features are only available to Plus customers. If you're a free customer and you have GPT-4o, feel free to try some of the prompts. Those features may work for you, and undoubtedly will as time goes on.

Advanced Data Analysis produces relatively ugly charts. But it rocks. First, let's discuss where ChatGPT gets its data, then we'll make some tables.

How to use ChatGPT to make charts and tables

List the top five cities in the world by population. Include country.

I asked this question to ChatGPT's free version and here's what I got back:

Turning that data into a table is simple. Just tell ChatGPT you want a table:

Make a table of the top five cities in the world by population. Include country.

Make a table of the top five cities in the world by population. Include country and a population field

You can also specify certain details for the table, like field order and units. Here, I'm moving the country first and compressing the population numbers.

Make a table of the top five cities in the world by population. Include country and a population field. Display the fields in the order of rank, country, city, population. Display population in millions (with one decimal point), so 37,833,000 would display as 37.8M.

Note that I gave the AI an example of how I wanted the numbers to display.
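That display rule is a one-line transformation if you ever want to reproduce it outside ChatGPT. A minimal Python sketch (the city figures below are illustrative placeholders, not authoritative population data):

```python
def format_millions(population: int) -> str:
    """Render a raw count as a compact 'millions' string,
    e.g. 37,833,000 -> '37.8M' (one decimal place)."""
    return f"{population / 1_000_000:.1f}M"

# Illustrative figures only -- not real rankings or populations.
cities = [("Japan", "Tokyo", 37_833_000), ("India", "Delhi", 32_941_000)]
for rank, (country, city, pop) in enumerate(cities, start=1):
    print(rank, country, city, format_millions(pop))
```

Giving ChatGPT a worked example in the prompt, as above, is effectively handing it this format specification in plain English.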

That's about as far as the free version will take us. From now on, we're switching to the $20/month ChatGPT Plus version.

In this example, we're just going to make a simple bar chart.

Make a bar chart of the top five cities in the world by population

Chatty little tool, isn't it?

The eagle-eyed among you may have noticed the discrepancy in populations between the previous table shown and the results here. Notice that the table has a green icon and this graph has a purple icon. We've jumped from GPT-3.5 (the free version of ChatGPT) to GPT-4 (in ChatGPT Plus). It's interesting that the differing LLMs have slightly different data. This difference is all part of why it pays to be careful when using AIs, so double-check your work. In our case, we're just demonstrating charts, but this is a tangible example of where confidently presented data can be wrong or inconsistent.

The dataset I chose for this article is readily available from a government site, so you can replicate this experiment on your own. There are a ton of great datasets available on Data.gov, but I found that many are far too large for ChatGPT to use.

Also: How to use ChatGPT to create an app

Once I downloaded this one, I realized it also included information on ethnicity, so we can run a number of different charts from the same dataset.

Click the little upload button and then tell it the data file you want to import.

I asked it to show me the first five lines of the file so I'd know more about the file's format.

Create a pie chart showing gender as a percentage of the overall dataset

And here's the result:

Unfortunately, the dark shade of green makes the numbers difficult to read. Fortunately, you can instruct Advanced Data Analysis to use different colors. I was careful to choose colors that did not reinforce gender stereotypes.

Create a pie chart showing gender as a percentage of the overall dataset. Use light green for male and medium yellow for female.

Show the distribution of ethnicity in the dataset using a pie chart. Use only light colors.

And here's the result. Notice anything?

Apparently, New York didn't properly normalize its data. It used "WHITE NON HISPANIC" and "WHITE NON HISP" together, "BLACK NON HISPANIC" and "BLACK NON HISP" together, and "ASIAN AND PACIFIC ISLANDER" and "ASIAN AND PACI" together. This resulted in inaccurate representations of the data.
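If you were cleaning the file yourself rather than instructing ChatGPT, the fix is an ordinary label-normalisation pass. A sketch using toy rows (the row values here are made up to show the collapse):

```python
from collections import Counter

# Map the truncated labels onto their canonical (longer) forms.
CANONICAL = {
    "WHITE NON HISP": "WHITE NON HISPANIC",
    "BLACK NON HISP": "BLACK NON HISPANIC",
    "ASIAN AND PACI": "ASIAN AND PACIFIC ISLANDER",
}

def normalize(label: str) -> str:
    return CANONICAL.get(label, label)

# Toy rows standing in for the NYC ethnicity column.
rows = ["WHITE NON HISPANIC", "WHITE NON HISP", "BLACK NON HISP",
        "ASIAN AND PACIFIC ISLANDER", "ASIAN AND PACI"]
counts = Counter(normalize(r) for r in rows)
print(counts)  # the five raw labels collapse into three buckets
```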

One benefit of ChatGPT is it remembers instructions throughout a session. So I was able to give it this instruction:

For all the following requests, group "WHITE NON HISPANIC" and "WHITE NON HISP" together. Group "BLACK NON HISPANIC" and "BLACK NON HISP" together. Group "ASIAN AND PACIFIC ISLANDER" and "ASIAN AND PACI". Use the longer of the two ethnicity names when displaying ethnicity.

And it replied:

Let's try the chart again, using the same prompt.

Show the distribution of ethnicity in the dataset using a pie chart. Use only light colors.

That's better:

You need to be diligent when looking at results. For example, in a request for top baby names, the AI separated out "Madison" and "MADISON" as two different names:

For all the following requests, baby names should be case insensitive.
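Case-insensitive counting is the same normalisation idea: fold each name to one canonical form before grouping. A sketch with made-up rows:

```python
from collections import Counter

names = ["Madison", "MADISON", "Sofia", "sofia", "Liam"]
# Fold case before counting so 'Madison' and 'MADISON' are one name.
counts = Counter(name.title() for name in names)
print(counts)  # Madison: 2, Sofia: 2, Liam: 1
```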

For each ethnicity, present two pie charts, one for each gender. Each pie chart should list the top five baby names for that gender and that ethnicity. Use only light colors.

As it turns out, the chart generated text that was too small to read. So, to get a more useful chart, we can export it back out. I'm going to specify both file format and file width:

Export this chart as a 3000 pixel wide JPG file.

And here's the result:

Notice that Sofia and Sophia are very popular, but are shown as two different names. But that's what makes charts so fascinating.
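Advanced Data Analysis typically draws its charts with matplotlib, which sizes figures in inches rather than pixels: pixel width is figure width times DPI. A sketch of the arithmetic behind a 3000-pixel-wide export (the 0.6 aspect ratio is an arbitrary choice for illustration):

```python
def figsize_for_width(px: int, dpi: int = 300, aspect: float = 0.6):
    """Matplotlib measures figures in inches: pixels = inches * dpi.
    Return a (width, height) figsize that yields `px` pixels of width."""
    width_in = px / dpi
    return (width_in, width_in * aspect)

print(figsize_for_width(3000))  # (10.0, 6.0)
```

Passing the result to `plt.subplots(figsize=...)` and saving with `fig.savefig("chart.jpg", dpi=300)` is how a plotting script would honor the 3000-pixel request in the prompt above.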

FAQ

How much does it cost to use Advanced Data Analysis?

Advanced Data Analysis comes with ChatGPT Plus. Some of its features are available in GPT-4o for the free version of ChatGPT. ChatGPT Plus is $20/month. Advanced Data Analysis is also included with the Enterprise edition, but pricing for that hasn't been released yet.

Is the data uploaded to ChatGPT for charting kept private or is there a risk of data exposure?

Assume that there's always a privacy risk.

I asked this question to ChatGPT and this is what it told me:

Data privacy is a priority for ChatGPT. Uploaded data is used solely for the purpose of the user's current session and is not stored long-term or used for any other purposes. However, for highly sensitive data, users should always exercise caution and consider using the Enterprise version of ChatGPT, which offers enhanced data confidentiality.

Also: Generative AI brings new risks to everyone. Here's how you can stay safe

My recommendation: Don't trust ChatGPT or any generative AI tool. The Enterprise version is supposed to have more privacy controls, but I would recommend only uploading data that you wouldn't mind finding its way into public view.

Can ChatGPT's Advanced Data Analysis handle real-time data or is it more suited for static datasets?

It's possible, but there are some practical limitations. First, the Plus account will throttle the number of requests you can make in a given period of time. Second, you have to upload each file individually. There is the possibility you could use a licensed ChatGPT API to do real-time analytics. But for the chatbot itself, you're looking at parsing data at rest.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
