Khazen

How Saudi Arabia is indigenizing the AI revolution and future-proofing its workforce

By Radwan Radwan — arabnews.com — JEDDAH: In the coming years, artificial intelligence technology is expected to transform economies, business practices and the way people live, work and consume. Conscious of these potentially momentous changes on the horizon, Saudi Arabia is pouring investment into AI research and development. The Kingdom launched its National Strategy for Data and Artificial Intelligence in October 2020 with the aim of becoming a global leader in the field, as it seeks to attract $20 billion in foreign and local investment by 2030. Saudi Arabia is also determined to future-proof its workforce, starting with the training and development of a pool of 20,000 AI and data specialists. Riyadh’s adoption of digitalization and emerging technologies is forecast to contribute some 2.4 percent to its gross domestic product by 2030, according to a recent report by global consultancy firm PwC.

In terms of average annual growth in the contribution of AI by region, Saudi Arabia is expected to post a rate of 31.3 percent between 2018 and 2030, the PwC report added. “I believe that Saudi Arabia has a huge potential,” Ali Al-Moussa, a Saudi entrepreneur and AI expert, told Arab News. “Being in the field for years now, I saw a lot of smart, talented people who are able to compete with (others around) the globe to create great technologies, not only artificial intelligence, but everything from robotics to blockchain, you name it.” Saudi Arabia’s drive toward new technologies aligns with the objectives of the Vision 2030 social reform and economic diversification agenda, which aims to strengthen the Kingdom’s position as the regional leader in the field.

Read more
Melto D’Moronoyo: Cardinal Patriarch Rai Aussie visit will unite Maronite Eparchy

By Vanessa Boumelhem — catholicweekly.com — While millions of young people gathered around the pope in the streets of Lisbon, Portugal for World Youth Day, a similar gathering took place in the streets of Lebanon, where thousands of Catholic youth unable to be in Portugal came together for their own celebrations. The event was organised by 300 volunteers from all the different Catholic rites under the youth committee of the Assembly of Catholic Patriarchs and Bishops of Lebanon. Under the World Youth Day theme, “Mary arose and went with haste,” Lebanese Catholic youth gathered for prayer, Eucharistic Adoration and catechesis, bringing forth a strong message of hope and faith in times of hardship and adversity. Though living in a country that is crisis-stricken and marred by worsening economic conditions, the youth were able to renew their hope and strengthen their faith, sending a message to Lebanese Catholics across the world.

This week, the Australian Maronite Catholic community welcomed our Patriarch, His Beatitude and Eminence, Patriarch Mar Bechara Boutros Cardinal Rai, Patriarch of Antioch and all the East, to Australia primarily to celebrate the Golden Jubilee Mass. He will also preside over the Sixth Congress of Maronite Bishops of Eparchies outside the Patriarchal Territory and General Superiors of Maronite Religious Orders. As the eparchy celebrates its golden jubilee, we recognise that this milestone has come as a result of the unwavering strength of the Lebanese Maronites, who have faced countless challenges in their past. Our eparchy was established after 100 years of Maronite presence in Australia, which grew prominent enough to support the need for Maronite priests and churches.

While the first Maronite parish in Australia was established in 1897, the eparchy was not officially established until 1973. The Maronite community grew slowly in the early 1900s, but by the mid-1900s, emigration from Lebanon resumed and the community began to grow rapidly, organising itself into village and family associations while assimilating into wider society. With this, the need for a life that better preserved and promoted Maronite values and customs grew, eventually leading to the formation of the strong Maronite community in Australia today.

Read more
How much energy does AI use compared to humans? Surprising study ignites controversy

by Bryson Masse — venturebeat.com — AI’s carbon footprint is no open-and-shut case, according to scientists from the University of California, Irvine and MIT, who published a paper earlier this year on the open-access site arXiv.org that shakes up assumptions about the energy use of generative AI models, and which set off a debate among leading AI researchers and experts this past week. The paper found that when producing a page of text, an AI system such as ChatGPT emits 130 to 1,500 times less carbon dioxide equivalent (CO2e) than a human.

The paper concludes that AI has the potential to carry out several significant activities with substantially lower emissions than humans. However, an ongoing dialogue among AI researchers reacting to the paper this week also highlights how accounting for interactions between climate, society, and technology poses immense challenges warranting continual reexamination.

From blockchain to AI models, environmental effects need to be measured

In an interview with VentureBeat, the authors of the paper, University of California at Irvine professors Bill Tomlinson and Don Patterson, and MIT Sloan School of Management visiting scientist Andrew Torrance, offered some insight into what they were hoping to measure.

Tomlinson said the paper, originally published in March, has been submitted to the research journal Scientific Reports, where it is currently under peer review. The study authors analyzed existing data on the environmental impact of AI systems, human activities, and the production of text and images, drawing on studies and databases that track how AI and humans affect the environment. For example, they used an informal online estimate for ChatGPT of 10 million queries generating roughly 3.82 metric tons of CO2e per day, while also amortizing the training footprint of 552 metric tons of CO2e. For further comparison, they included data from a low-impact LLM called BLOOM. On the human side, they used the annual carbon footprints of an average person in the US (15 metric tons) and in India (1.9 metric tons), amortized over the estimated amount of time it would take a person to write a page of text or create an image, to compare the per-capita emissions involved.
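To make this style of accounting concrete, here is a minimal back-of-the-envelope sketch in Python. The traffic, inference, training and per-capita figures are the ones quoted above; the one-year training amortization window, the one-hour writing time per page and the one-query-per-page assumption are illustrative choices of ours rather than values from the paper, so the resulting ratios will not reproduce the authors’ 130 to 1,500 times range, which rests on their own assumptions.

```python
# Back-of-the-envelope comparison in the spirit of the paper's accounting.
# Figures marked "quoted above" come from the article; the amortization
# window, writing time and queries-per-page are illustrative assumptions.

QUERIES_PER_DAY = 10_000_000        # quoted above: informal ChatGPT traffic estimate
INFERENCE_TONS_PER_DAY = 3.82       # quoted above: metric tons CO2e per day (inference)
TRAINING_FOOTPRINT_TONS = 552.0     # quoted above: one-off training footprint
TRAINING_AMORTIZATION_DAYS = 365    # assumption: spread training over a year of traffic

US_TONS_PER_YEAR = 15.0             # quoted above: average US per-capita footprint
INDIA_TONS_PER_YEAR = 1.9           # quoted above: average India per-capita footprint
HOURS_PER_PAGE = 1.0                # assumption: human writing time for one page
QUERIES_PER_PAGE = 1                # assumption: one query yields one page of text

GRAMS_PER_TON = 1_000_000

def ai_grams_per_page() -> float:
    """Grams of CO2e attributed to the AI system for one page of text."""
    per_query_tons = (INFERENCE_TONS_PER_DAY / QUERIES_PER_DAY
                      + TRAINING_FOOTPRINT_TONS
                        / (TRAINING_AMORTIZATION_DAYS * QUERIES_PER_DAY))
    return per_query_tons * QUERIES_PER_PAGE * GRAMS_PER_TON

def human_grams_per_page(tons_per_year: float) -> float:
    """Grams of CO2e of a person's annual footprint, prorated over the
    time spent writing one page."""
    grams_per_hour = tons_per_year * GRAMS_PER_TON / (365 * 24)
    return grams_per_hour * HOURS_PER_PAGE

if __name__ == "__main__":
    ai = ai_grams_per_page()
    for label, tons in [("US", US_TONS_PER_YEAR), ("India", INDIA_TONS_PER_YEAR)]:
        human = human_grams_per_page(tons)
        print(f"{label}: human ≈ {human:.0f} g/page, AI ≈ {ai:.2f} g/page, "
              f"ratio ≈ {human / ai:.0f}x")
```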

Read more
Elon Musk’s Neuralink begins accepting human patients for trials of its brain implant

by Carl Franzen — venturebeat.com — Do you want to put an implant designed by Elon Musk’s company Neuralink — perhaps best known for killing 1,500 test animals — into your brain? Are you at least 22 years old and do you have quadriplegia (loss of function in four limbs) from a spinal cord injury, or amyotrophic lateral sclerosis (ALS)?

Then you may qualify to participate in the first-ever volunteer human trials of Neuralink’s first brain-computer interface, for which the company has begun recruiting participants, as it announced on its website today. “The PRIME Study (short for Precise Robotically Implanted Brain-Computer Interface) – a groundbreaking investigational medical device trial for our fully-implantable, wireless brain-computer interface (BCI) – aims to evaluate the safety of our implant (N1) and surgical robot (R1) and assess the initial functionality of our BCI for enabling people with paralysis to control external devices with their thoughts,” explains the blog post. The company has courted controversy over tests of its implant on monkeys that allegedly resulted in their deaths (Musk has posted on his social network X, formerly Twitter, that the monkeys were terminally ill anyway), but that apparently isn’t stopping it from moving forward to try the tech on humans next, after receiving an exemption from the U.S. Food and Drug Administration in May.

What’s involved in the Neuralink implant human trials?

Read more
Does daytime napping affect your brain health?

ALBAWABA — by Mayar Alkhatieb — According to a study by the University of California and the University of the Republic of Uruguay, regular napping may be associated with better brain health. The research findings suggest a correlation between napping and larger brain volume, which has been linked to lower risks of dementia and other […]

Read more
Today’s AI is ‘alchemy,’ not science — what that means and why that matters

The AI Beat, by Sharon Goldman — A New York Times article this morning, titled “How to Tell if Your AI Is Conscious,” says that in a new report, “scientists offer a list of measurable qualities” based on a “brand-new” science of consciousness. The article immediately jumped out at me, as it was published just a few days after I had a long chat with Thomas Krendl Gilbert, a machine ethicist who, among other things, has long studied the intersection of science and politics. Gilbert recently launched a new podcast, called “The Retort,” along with Hugging Face researcher Nathan Lambert, with an inaugural episode that pushes back on the idea of today’s AI as a truly scientific endeavor. Gilbert maintains that much of today’s AI research cannot reasonably be called science at all. Instead, it can be viewed as a new form of alchemy — that is, the medieval forerunner of chemistry, which can also be defined as a “seemingly magical process of transformation.”

Like alchemy, AI is rooted in ‘magical’ metaphors

Many critics of deep learning and of large language models, including those who built them, sometimes refer to AI as a form of alchemy, Gilbert told me on a video call. What they mean by that, he explained, is that it’s not scientific, in the sense that it’s not rigorous or experimental. But he added that he actually means something more literal when he says that AI is alchemy. “The people building it actually think that what they’re doing is magical,” he said. “And that’s rooted in a lot of metaphors, ideas that have now filtered into public discourse over the past several months, like AGI and super intelligence.” The prevailing idea, he explained, is that intelligence itself is scalar — depending only on the amount of data thrown at a model and the computational limits of the model itself.

But, he emphasized, like alchemy, much of today’s AI research is not necessarily trying to be what we know as science, either. The practice of alchemy historically had no peer review or public sharing of results, for example. Much of today’s closed AI research does not, either. “It was very secretive, and frankly, that’s how AI works right now,” he said. “It’s largely a matter of assuming magical properties about the amount of intelligence that is implicit in the structure of the internet — and then building computation and structuring it such that you can distill that web of knowledge that we’ve all been building for decades now, and then seeing what comes out.”

Read more
AI can help screen for cancer—but there’s a catch


by Cassandra Willyard — MIT Technology Review — This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. I just had a birthday, and you know what that means—I’m newly eligible for a screening colonoscopy. (#milestones!) I’ve been thinking about cancer screening a lot recently, because I’ve seen a handful of headlines in the past few months about how AI will revolutionize cancer detection. Just last week Microsoft announced that it had partnered with a digital pathology company, Paige, to build the world’s largest image-based AI model for identifying cancer. The training data set for the algorithm contains 4 million images. “This is sort of a groundbreaking, land-on-the-moon kind of moment for cancer care,” Paige CEO Andy Moye told CNBC.

Well, it might be. Last month, results from the first clinical trial of AI-supported breast cancer screening came out. The researchers compared two methods for reading a mammogram: a standard reading by two independent radiologists, and a system that used a single radiologist and an AI to assign patients a numerical cancer risk score from 1 to 10. In the latter group, those who scored a 10—the highest risk—then had their images read by two radiologists. The AI-supported model reduced workload by 44% and detected 20% more cancers. That sounds like a good thing. In theory, catching cancers earlier should make them easier to treat, saving lives. But that’s not always what the data shows. A study published in late August combed the literature for randomized clinical trials that compared mortality (from any cause, not just cancer) in two groups: people who underwent cancer screening and people who did not. For most common types of cancer screening, they found no significant difference. The exception was sigmoidoscopy, a type of colon cancer screening that involves visualizing only the lower portion of the colon.
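As a rough illustration of the AI-supported reading workflow described in that trial (this is not code from the study, and the class and function names are hypothetical), the triage logic amounts to something like the following, where only exams given the top risk score of 10 receive the second radiologist read.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Mammogram:
    patient_id: str
    ai_risk_score: int  # 1 (lowest risk) to 10 (highest risk), as in the trial

def readers_needed(exam: Mammogram) -> int:
    """Triage rule described above: a single radiologist reads most exams,
    but the highest-risk score (10) falls back to standard double reading."""
    return 2 if exam.ai_risk_score == 10 else 1

def total_readings(exams: List[Mammogram]) -> int:
    """Radiologist readings under the AI-supported arm; the standard arm
    always costs 2 * len(exams) readings (double reading)."""
    return sum(readers_needed(e) for e in exams)

def workload_reduction(exams: List[Mammogram]) -> float:
    """Fraction of readings saved relative to standard double reading."""
    return 1 - total_readings(exams) / (2 * len(exams))
```

The saving depends entirely on how many exams the AI scores a 10; the 44% workload reduction reported in the trial reflects its actual score distribution, which this sketch does not attempt to reproduce.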

Read more
17 Doctors Failed to Diagnose This Boy’s Severe Pain. ChatGPT Came Up With the Answer

by Diane Herbst — Today show — For three years, one mother’s young son suffered from increasing pain and other symptoms – with no answers from 17 doctors. “We saw so many doctors. We ended up in the ER at one point. I kept pushing,” the boy’s mom, Courtney, who did not wish to reveal her last name for privacy concerns, told Today. Earlier in the year, the frustrated mom turned to ChatGPT and input information from her son Alex’s medical records, including notes from his MRI. “I really spent the night on the (computer) … going through all these things,” she told Today. It was through the artificial intelligence technology that Courtney finally found the correct diagnosis. When ChatGPT suggested that Alex, whose medical odyssey started at the age of 4, could be suffering from tethered cord syndrome, a rare neurological condition associated with spina bifida, “it made a lot of sense,” she told Today.

Her hunch was confirmed by a pediatric neurosurgeon: After viewing Alex’s MRI, “she said point blank, ‘Here’s occulta spina bifida, and here’s where the spine is tethered,’” Courtney recalled to the outlet. With the diagnosis, Courtney felt “every emotion in the book, relief, validated, excitement for his future,” she told Today. Tethered cord syndrome is caused by tissue attachments limiting the movement of the spinal cord within the spinal column, causing abnormal stretching of the cord, according to the American Association of Neurological Surgeons. Dr. Holly Gilmer, a pediatric neurosurgeon at the Michigan Head & Spine Institute who treated Alex, told the outlet that the condition is hard to diagnose in young children “because they can’t speak.” Several weeks ago, Alex underwent surgery to repair his tethered cord syndrome and is still recovering, according to Today.

Read more
Why Airbnb CEO Brian Chesky eliminated the ‘fiefdoms’ in his company—and now likens his role to an ‘orchestra conductor’

Story by Steve Mollman — fortune.com — The pandemic hit Airbnb hard. The company lost 80% of its business in March 2020, and people were questioning its ability to survive. Barely two months into the pandemic, it laid off about 1,900 people, or a quarter of its employees. Fast forward to today: not only did it weather the crisis, but in June Airbnb made its debut on the Fortune 500 list of top U.S. public companies by revenue, coming off its first-ever profitable year. The turnaround wasn’t easy. Airbnb had to completely reorganize itself. “We shuttered most of the divisions,” CEO Brian Chesky said on a Wednesday episode of The Social Radars podcast. That move was something Airbnb needed to do anyway, he said—as do many startups that have grown into larger organizations, he believes.

For a startup, he explained, it’s tempting to “divisionalize” in order to move faster, since decision-making can become a bottleneck at the top of the organization. But while that might work at first, he added, in the long run it can slow a company down. The problem that the pandemic forced him to face, he said, was that “we had this culture where everyone could do anything. People could own their own projects.” There were too many divisions, or “fiefdoms,” he said, such as ones focused on luxury, pro hosts, a magazine, transportation, and so on. Airbnb had followed a common line of thinking in Silicon Valley, he said. It goes like this: “Basically you share the values of the company, you democratize data, you hire smart people, and you assume that they’ll make the right decisions for the company.”

But, he added, “that is all wrong. It sounds great, and it’s right for some people, but it was wrong for us.” Chesky studied how Steve Jobs revamped a struggling Apple when he returned to the company he’d cofounded, noting how he “shuttered most of the divisions, and he went from a divisional structure to a functional structure.” Adopting a similar strategy, Chesky got rid of the unnecessary divisions at Airbnb. A few core ones would remain, but from then on, he said, “Everyone’s gonna work on everything together. There are no longer swim lanes. There’s one roadmap, and no one ships anything unless it’s on the roadmap. And then I’m gonna review every single thing in the company before it ships.”

Read more
State Department diverting $85m in Egyptian military financing towards Taiwan and Lebanon

by breakingdefense.com — WASHINGTON — Human rights concerns have prompted the Biden administration to divert $85 million away from a larger foreign military financing (FMF) pot for Egypt and divvy it up between Taiwan and Lebanon, State Department officials announced today. In an email to Breaking Defense, an administration source confirmed that $55 million would be bound for Taiwan and $30 million for Lebanon. In total, Secretary of State Antony Blinken has notified Congress that Washington will provide Cairo with $1.215 billion in FMF from its fiscal 2022 budget, one State Department official told reporters. “Egypt is making specific and ongoing contributions to US national security priorities,” the official said. “Egypt is a strategic partner of the United States with a crucial voice in efforts to advance regional peace and security …. This decision in no way diminishes our commitment to advancing human rights in Egypt and around the world.”

A $1.3 billion FY22 pot included $980 million in funds that were not subject to human rights conditions, and another $320 million that had the conditions attached. From that second coffer, Blinken opted to “waive the certification requirements” due to “US national security interest” for $235 million of that total, the official explained. However, he was not able to do that with the remaining $85 million because it was subject to different statutory certification requirements aimed at ensuring that Egypt is making “clear and consistent progress in releasing political prisoners,” providing detainees with due process and preventing American citizens from being harassed. “That requirement on the $85 million may not be waived [and] the secretary determined that Egypt has not fulfilled those conditions,” the State Department official said. “Therefore, we are reprogramming $85 million [for] other priorities and other countries in consultation with the Congress.” “As we have done for decades, consistent with the Taiwan Relations Act, we will continue to provide defensive articles and services necessary for Taiwan to maintain a sufficient self-defense capability,” an administration official wrote in a statement to Breaking Defense. “We will support Taiwan’s self-defense capabilities commensurate with the threat it faces.” That $55 million joins an $80 million military transfer to Taiwan that the Biden administration unveiled last month.
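For readers keeping track of the arithmetic, the reported figures fit together consistently; the short sketch below simply re-derives the $1.215 billion and $85 million totals from the numbers quoted above (values in millions of dollars; the grouping and variable names are ours, not the State Department’s).

```python
# Consistency check of the FY22 foreign military financing (FMF) figures
# quoted above. All values are in millions of US dollars as reported;
# the variable names and grouping are illustrative, not official.

FY22_FMF_POT = 1300                 # total FY22 FMF pot for Egypt
unconditioned = 980                 # not subject to human rights conditions
conditioned = 320                   # subject to human rights conditions
assert unconditioned + conditioned == FY22_FMF_POT

waived = 235                        # certification waived on national security grounds
not_waivable = 85                   # stricter statutory conditions; cannot be waived
assert waived + not_waivable == conditioned

reprogrammed = {"Taiwan": 55, "Lebanon": 30}
assert sum(reprogrammed.values()) == not_waivable

provided_to_egypt = unconditioned + waived
print(provided_to_egypt)            # 1215, i.e. the $1.215 billion notified to Congress
```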

Read more