Khazen

Robot Bees From MIT May Be The Pollinators Of The Future

By Steve Hanley, CleanTechnica — Worried that alterations to the Earth’s climate may wipe out all the bees, leading to a steep decline in the availability of fruits and vegetables? Fear not. Researchers at MIT say they have successfully created robot bees that can do the job of real bees just as well, and maybe better in some cases. […]

Read more
The Next Unicorn: The Road to AGI

By Malek El Khazen (edited with OpenAI) — The pursuit of Artificial General Intelligence (AGI) demands a paradigm shift in how AI systems are trained and deployed. Simply adding more hardware to handle increasing computational demands has reached a point of diminishing returns. The next breakthrough will come from a unified platform that integrates […]

Read more
Abu Dhabi Launches International Carbon Measurement, Reporting and Verification Programme

The Environment Agency – Abu Dhabi (EAD) has launched an international-standard carbon Measurement, Reporting, and Verification (MRV) programme to address carbon emissions and accelerate the emirate’s transition to a low-carbon economy. This move supports Abu Dhabi’s broader decarbonisation goals, including reducing carbon emissions by […]

Read more
UAE’s Kazar to Invest $2.5 Billion in Egypt’s 3.1GW Hybrid Renewable Power Plant

Emirati investment firm Kazar is investing $2.5 billion to build a hybrid renewable energy station in Egypt’s Zafarana region. The project, a collaboration with the Egyptian government, will deliver a total capacity of 3.1 gigawatts—2 GW from solar energy and 1.1 GW from wind energy—making it a key addition to Egypt’s renewable energy infrastructure. The station will operate […]

Read more
Summary of FY24: Bigger Doesn’t Mean Better – What About FY25?

By Malek El Khazen, Data, AI & IoT Cloud Solution Architect at Microsoft (edited with OpenAI) — In the AI world, the obsession with “bigger” has driven an arms race for larger models, faster chips, and sprawling data center setups. But bigger doesn’t mean better. The future will reward precision, efficiency, […]

Read more
Swallow this robot: Endiatx’s tiny pill examines your body with cameras, sensors

By VentureBeat — In a development straight out of science fiction, Endiatx, a pioneering medical technology company, is making significant strides in bringing its robotic pill to market. The company’s CEO, Torrey Smith, recently sat down with VentureBeat to share exciting updates on their progress, nearly two years after our initial coverage of the startup’s ambitious vision. Founded in 2019, Endiatx has been steadily working towards realizing the fantastic voyage of miniaturized robots navigating the human body for diagnostic and therapeutic purposes. Their flagship product, the PillBot, is an ingestible robotic capsule equipped with cameras, sensors, and wireless communication capabilities, allowing doctors to examine the gastrointestinal tract with unprecedented precision and control.

In the interview, Smith revealed that Endiatx has raised $7 million in funding to date, with the largest investment of $1.5 million coming from Singapore-based Verge Health Tech Fund. This injection of capital has propelled the company forward, enabling them to refine their technology and conduct clinical trials. “We’re currently in clinical trials with our pill bot technology,” Smith explained. “We’ll be starting pivotal trials at a leading U.S. medical institution in Q3/Q4.” Though Smith did not name the institution due to confidentiality agreements, he hinted that it is a renowned facility known for its expertise in gastroenterology.

The PillBot has come a long way since its inception. The current prototype measures just 13mm by 30mm and boasts impressive capabilities. “It can transmit high-res video at 2.3 megapixels per second, and we have plans to quadruple that video quality soon,” Smith enthused. The CEO himself has played a vital role in testing, having swallowed 43 PillBots to date, including live on stage in front of a stunned audience.

Read more
Harvard, MIT, and Wharton research reveals pitfalls of relying on junior staff for AI training

By Michael Nuñez (@MichaelFNunez), VentureBeat — As companies race to adopt artificial intelligence systems, conventional wisdom suggests that younger, more tech-savvy employees will take the lead in teaching their managers how to effectively use the powerful new tools. But a new study casts doubt on that assumption when it comes to the rapidly evolving technology of generative AI.

The research, conducted by academics from Harvard Business School, MIT, Wharton, and other institutions in collaboration with Boston Consulting Group, found that junior employees who experimented with a generative AI system made recommendations for mitigating risks that ran counter to expert advice. The findings suggest that companies cannot rely solely on reverse mentoring to ensure the responsible use of AI. “Our interviews revealed two findings that run counter to the existing literature,” wrote the authors. “First, the tactics that the juniors recommended to mitigate their seniors’ concerns ran counter to those recommended by experts in GenAI technology at the time, and so revealed that the junior professionals might not be the best source of expertise in the effective use of this emerging technology for more senior members.”

Junior consultants struggle with AI risk mitigation in GPT-4 experiment

The researchers interviewed 78 junior consultants in mid-2023 who had recently participated in an experiment giving them access to GPT-4, a powerful generative AI system, for a business problem-solving task. The consultants, who lacked technical AI expertise, shared tactics they would recommend to alleviate managers’ concerns about risks. But the study found the junior employees’ risk mitigation tactics were often grounded in “a lack of deep understanding of the emerging technology’s capabilities,” focused on changing human behavior rather than AI system design, and centered on project-level interventions rather than organization or industry-wide solutions.

Navigating the challenges of generative AI adoption in business

Read more
Five ways criminals are using AI

Artificial intelligence has brought a big boost in productivity—to the criminal underworld.

By technologyreview.com — Generative AI provides a new, powerful tool kit that allows malicious actors to work far more efficiently and internationally than ever before, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro. Most criminals are “not living in some dark lair and plotting things,” says Ciancaglini. “Most of them are regular folks that carry on regular activities that require productivity as well.”

Last year saw the rise and fall of WormGPT, an AI language model built on top of an open-source model and trained on malware-related data, which was created to assist hackers and had no ethical rules or restrictions. But last summer its creators announced they were shutting the model down after it started attracting media attention. Since then, cybercriminals have mostly stopped developing their own AI models. Instead, they are opting for tricks with existing tools that work reliably. That’s because criminals want an easy life and quick gains, Ciancaglini explains. For any new technology to be worth the unknown risks of adopting it—for example, a higher risk of getting caught—it has to be better and bring higher rewards than what they’re currently using.

Here are five ways criminals are using AI now.

Phishing

The biggest use case for generative AI among criminals right now is phishing, which involves trying to trick people into revealing sensitive information that can be used for malicious purposes, says Mislav Balunović, an AI security researcher at ETH Zurich. Researchers have found that the rise of ChatGPT has been accompanied by a huge spike in the number of phishing emails.

Spam-generating services, such as GoMail Pro, have ChatGPT integrated into them, which allows criminal users to translate or improve the messages sent to victims, says Ciancaglini. OpenAI’s policies restrict people from using its products for illegal activities, but that is difficult to police in practice, because many innocent-sounding prompts could be used for malicious purposes too, says Ciancaglini.

OpenAI says it uses a mix of human reviewers and automated systems to identify and enforce against misuse of its models, and issues warnings, temporary suspensions, and bans if users violate the company’s policies. “We take the safety of our products seriously and are continually improving our safety measures based on how people use our products,” a spokesperson for OpenAI told us. “We are constantly working to make our models safer and more robust against abuse and jailbreaks, while also maintaining the models’ usefulness and task performance,” they added.

Read more
Apple announces new accessibility features, including Eye Tracking, Music Haptics, and Vocal Shortcuts

By apple.com — CUPERTINO, CALIFORNIA — Apple today announced new accessibility features coming later this year, including Eye Tracking, a way for users with physical disabilities to control iPad or iPhone with their eyes. Additionally, Music Haptics will offer a new way for users who are deaf or hard of hearing to experience music using the Taptic Engine in iPhone; Vocal Shortcuts will allow users to perform tasks by making a custom sound; Vehicle Motion Cues can help reduce motion sickness when using iPhone or iPad in a moving vehicle; and more accessibility features will come to visionOS.

These features combine the power of Apple hardware and software, harnessing Apple silicon, artificial intelligence, and machine learning to further Apple’s decades-long commitment to designing products for everyone. “We believe deeply in the transformative power of innovation to enrich lives,” said Tim Cook, Apple’s CEO. “That’s why for nearly 40 years, Apple has championed inclusive design by embedding accessibility at the core of our hardware and software. We’re continuously pushing the boundaries of technology, and these new features reflect our long-standing commitment to delivering the best possible experience to all of our users.” “Each year, we break new ground when it comes to accessibility,” said Sarah Herrlinger, Apple’s senior director of Global Accessibility Policy and Initiatives. “These new features will make an impact in the lives of a wide range of users, providing new ways to communicate, control their devices, and move through the world.”

Eye Tracking Comes to iPad and iPhone

Powered by artificial intelligence, Eye Tracking gives users a built-in option for navigating iPad and iPhone with just their eyes. Designed for users with physical disabilities, Eye Tracking uses the front-facing camera to set up and calibrate in seconds, and with on-device machine learning, all data used to set up and control this feature is kept securely on device, and isn’t shared with Apple. Eye Tracking works across iPadOS and iOS apps, and doesn’t require additional hardware or accessories. With Eye Tracking, users can navigate through the elements of an app and use Dwell Control to activate each element, accessing additional functions such as physical buttons, swipes, and other gestures solely with their eyes.

Read more
From deepfakes to digital candidates: AI’s political play

By Gary Grossman, VentureBeat — AI is increasingly being used to represent, or misrepresent, the opinions of historical and current figures. A recent example is when President Biden’s voice was cloned and used in a robocall to New Hampshire voters. Taking this a step further, given the advancing capabilities of AI, what could soon be possible is the symbolic “candidacy” of a persona created by AI. That may seem outlandish, but the technology to create such an AI political actor already exists. There are many examples that point to this possibility. Technologies that enable interactive and immersive learning experiences bring historical figures and concepts to life. When harnessed responsibly, these can not only demystify the past but inspire a more informed and engaged citizenry.

People today can interact with chatbots reflecting the viewpoints of figures ranging from Marcus Aurelius to Martin Luther King, Jr., using the “Hello History” app, or George Washington and Albert Einstein through “Text with History.” These apps claim to help people better understand historical events or “just have fun chatting with your favorite historical characters.” Similarly, a Vincent van Gogh exhibit at Musée d’Orsay in Paris includes a digital version of the artist and offers viewers the opportunity to interact with his persona. Visitors can ask questions and the Vincent chatbot answers based on a training dataset of more than 800 of his letters. Forbes discusses other examples, including an interactive experience at a World War II museum that lets visitors converse with AI versions of military veterans.

The concerning rise of deepfakes

Of course, this technology may also be used to clone both historical and current public figures with other intentions in mind, and in ways that raise ethical concerns. I am referring here to the deepfakes that are increasingly proliferating, making it difficult to separate real from fake and truth from falsehood, as noted in the Biden clone example. Deepfake technology uses AI to create or manipulate still images, video, and audio content, making it possible to convincingly swap faces, synthesize speech, and fabricate or alter actions in videos. This technology mixes and edits data from real images and videos to produce realistic-looking and -sounding creations that are increasingly difficult to distinguish from authentic content.

While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less sanguine purposes. Worries abound about the potential of AI-generated deepfakes that impersonate known figures to manipulate public opinion and potentially alter elections.

The rise of political deepfakes

Just this month there have been stories about AI being used for such purposes. Imran Khan, Pakistan’s former prime minister, effectively campaigned from jail through speeches created with AI to clone his voice. The approach worked: Khan’s party performed surprisingly well in a recent election. As The New York Times wrote: “‘I had full confidence that you would all come out to vote. You fulfilled my faith in you, and your massive turnout has stunned everybody,’ the mellow, slightly robotic voice said in the minute-long video, which used historical images and footage of Mr. Khan and bore a disclaimer about its AI origins.”

Read more