Khazen

Law secretly drafted by ChatGPT makes it onto the books: ‘Unfortunately or fortunately, this is going to be a trend’

by Katyanna Quach — The Register — The council of Porto Alegre, a city in southern Brazil, has approved legislation drafted by ChatGPT. The ordinance is intended to prevent the city from charging taxpayers to replace water meters stolen by thieves. The council’s 36 members unanimously passed the proposal, which […]

Read more
Making an image with generative AI uses as much energy as charging your phone

MIT Technology Review by Melissa Heikkilä — Each time you use AI to generate an image, write an email, or ask a chatbot a question, it comes at a cost to the planet. In fact, generating an image using a powerful AI model takes as much energy as fully charging your smartphone, according to a new study by researchers at the AI startup Hugging Face and Carnegie Mellon University. However, they found that using an AI model to generate text is significantly less energy-intensive: generating text 1,000 times uses only as much energy as 16% of a full smartphone charge. Their work, which has yet to be peer reviewed, shows that while training massive AI models is incredibly energy-intensive, training is only one part of the puzzle. Most of a model’s carbon footprint comes from its actual use.

The study is the first time researchers have calculated the carbon emissions caused by using an AI model for different tasks, says Sasha Luccioni, an AI researcher at Hugging Face who led the work. She hopes understanding these emissions could help us make informed decisions about how to use AI in a more planet-friendly way. Luccioni and her team looked at the emissions associated with 10 popular AI tasks on the Hugging Face platform, such as question answering, text generation, image classification, captioning, and image generation. They ran the experiments on 88 different models. For each of the tasks, such as text generation, Luccioni ran 1,000 prompts and measured the energy used with a tool she developed called Code Carbon. Code Carbon makes these calculations by looking at the energy the computer consumes while the code runs.
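
To make the measurement concrete, here is a minimal sketch of how an experiment like this might be instrumented with the open-source codecarbon Python package. The model, prompt, and project name below are illustrative placeholders, not the study’s actual setup.

```python
# Minimal sketch: estimating the emissions of repeated text generation
# with the open-source `codecarbon` package. The model and prompt are
# illustrative placeholders, not the study's actual configuration.
from codecarbon import EmissionsTracker
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

tracker = EmissionsTracker(project_name="text-generation-energy")
tracker.start()
try:
    for _ in range(1000):  # the study ran 1,000 prompts per task
        generator("The weather today is", max_new_tokens=20)
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```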

Read more
How ChatGPT changed the world of tech in just one year

by Daniel Howley, Yahoo Technology Editor — It’s been quite a year for OpenAI. In the last few weeks alone, the company survived an attempted coup in which co-founder and CEO Sam Altman was fired and then rehired following pushback from employees and big-name investors like Microsoft (MSFT). And that’s not even the most interesting part of the story. Exactly one year ago tomorrow, OpenAI’s generative AI-powered ChatGPT hit the web, quickly becoming one of the fastest-growing apps in history and setting off an AI gold rush that continues to reverberate across the technology industry and beyond. Companies ranging from Google (GOOG, GOOGL) and Microsoft, an OpenAI investor, to Amazon (AMZN), Meta (META), and others are racing to build out their own generative AI-powered software platforms.

On the hardware front, the AI explosion has made Nvidia, the world’s leading AI chip developer, the hottest semiconductor company on Earth, again. Year to date, shares of Nvidia are up more than 200%. Intel (INTC) and AMD (AMD), meanwhile, are up 67% and 90%, respectively. “We all understand ChatGPT was a critical inflection point in the history of AI, in spite of the fact that it’s only a year out since its initial release,” Rishi Bommasani, the society lead at Stanford’s Center for Research on Foundation Models, told Yahoo Finance. But ChatGPT, and generative AI more generally, have raised questions about data usage rights and the potential to create and spread disinformation via images and videos. “While [generative AI] tools are empowering us in so many ways, with so many kinds of superpowers, it’s interesting to consider that the same tools can also apply to what supervillains want to do,” explained Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. “And so we have to think about what guardrails we need to put in place before we deploy the tools so that we ensure that the use is a good one.”

Read more
Pope called Cardinal Burke his ‘enemy’ and threatened to strip him of privileges, reports claim

By Thomas Colsy — Catholic Herald — Pope Francis has referred to an outspoken American cardinal as his “enemy” and threatened to strip him of his privileges, according to reports from Italy. The New Daily Compass, which claims the rumour has been confirmed by multiple sources in the Vatican, reported that the Pope was overheard […]

Read more
Contrary to reports, OpenAI probably isn’t building humanity-threatening AI

by Kyle Wiggers @kyle_l_wiggers — TechCrunch — Has OpenAI invented an AI technology with the potential to “threaten humanity”? From some of the recent headlines, you might be inclined to think so. Reuters and The Information first reported last week that several OpenAI staff members had, in a letter to the AI startup’s board of directors, flagged the “prowess” and “potential danger” of an internal research project known as “Q*.” This AI project, according to the reporting, could solve certain math problems, albeit only at grade-school level, but had, in the researchers’ opinion, a chance of building toward an elusive technical breakthrough. There’s now debate as to whether OpenAI’s board ever received such a letter; The Verge cites a source suggesting that it didn’t. But framing aside, Q* in actuality might not be as monumental, or as threatening, as it sounds. It might not even be new. AI researchers on X (formerly Twitter), including Meta’s chief AI scientist Yann LeCun, were immediately skeptical that Q* was anything more than an extension of existing work at OpenAI and at other AI research labs besides. In a post on X, Rick Lamers, who writes the Substack newsletter Coding with Intelligence, pointed to an MIT guest lecture OpenAI co-founder John Schulman gave seven years ago in which he described a mathematical function called “Q*.”

Several researchers believe the “Q” in the name “Q*” refers to “Q-learning,” an AI technique that helps a model learn and improve at a particular task by taking — and being rewarded for — specific “correct” actions. Researchers say the asterisk, meanwhile, could be a reference to A*, an algorithm for checking the nodes that make up a graph and exploring the routes between these nodes. Both have been around a while. Google DeepMind applied Q-learning to build an AI algorithm that could play Atari 2600 games at human level… in 2014. A* has its origins in an academic paper published in 1968. And researchers at UC Irvine several years ago explored improving A* with Q-learning — which might be exactly what OpenAI’s now pursuing.
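
Nothing public confirms what Q* contains, so purely as an illustration of the classic technique being referenced, here is a minimal sketch of tabular Q-learning. The hyperparameters and helper names are arbitrary choices for the example and imply nothing about OpenAI’s project; A*, by contrast, is a heuristic graph-search algorithm and is not shown here.

```python
# Illustrative only: textbook tabular Q-learning, the technique the "Q"
# in "Q*" may allude to. Hyperparameters below are arbitrary examples.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount factor, exploration rate
Q = defaultdict(float)                  # Q[(state, action)] -> estimated long-term value

def choose_action(state, actions):
    # Epsilon-greedy: usually exploit the best-known action, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    # Core Q-learning update: move Q(s, a) toward reward + discounted best next value.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```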

Nathan Lambert, a research scientist at the Allen Institute for AI, told TechCrunch he believes that Q* is connected to approaches in AI “mostly [for] studying high school math problems” — not destroying humanity. “OpenAI even shared work earlier this year improving the mathematical reasoning of language models with a technique called process reward models,” Lambert said, “but what remains to be seen is how better math abilities do anything other than make [OpenAI’s AI-powered chatbot] ChatGPT a better code assistant.”

Read more
The list AI founders need to know for 2024

Here’s a short list of posts for AI founders looking ahead to 2024: Startups must add AI value beyond ChatGPT integration: One criticism of startups that claim the mantle of AI is that they are creating thin wrappers around other folks’ technology. This sort of platform risk is not new, but it is a pertinent mental model […]

Read more
Mapped: The Migration of the World’s Millionaires in 2023

The world’s millionaires are on the move, and their migration patterns are shifting. In 2023, 122,000 high-net-worth individuals (HNWIs) are expected to move to a new country, with Australia reclaiming the top spot as the most popular destination. The United Arab Emirates, Singapore, the United States, and Switzerland round out the top five countries for HNWI inflows. At the other end of the spectrum, China is expected to lose the most HNWIs in 2023, with 13,500 millionaires leaving the country. India, the United Kingdom, Russia, and Brazil follow closely behind.

Why are millionaires moving? The reasons vary, but economic freedom, tax burdens, and investment opportunities are key factors. Singapore, which boasts the highest level of economic freedom in the world, is a popular destination for HNWIs. Greece, despite its economic challenges, is also expected to see a significant influx of millionaires due to its golden visa program. 

The impact of HNWI migration goes beyond the economic. It also has geopolitical implications, as governments compete to attract and retain the world’s economic elite.

Read more
The Top 2 Artificial Intelligence (AI) Companies Revolutionizing the Industry Right Now

by The Motley Fool — Artificial intelligence (AI) and machine learning (ML) are more than just buzzworthy terms for some cutting-edge companies. They are the foundations on which incredible businesses have been built. Even better, some of these companies make hay in industries essential to the economy. Cybersecurity is top of mind for C-suite executives in all industries, government agencies, school districts, and even nonprofits. Cybercriminals are always on the prowl, costing organizations billions each year. IBM notes that up to 90% of cyberattacks and 70% of breaches come through endpoint devices. AI-powered CrowdStrike Holdings (NASDAQ: CRWD) is the leader in endpoint security with a comprehensive, entirely cloud-based platform. The company’s results are on fire, as I’ll discuss below. Meanwhile, data centers are crucial for cloud applications, data storage, computing power, and (definitely) complex AI and ML software that requires massive computing power. Nvidia (NASDAQ: NVDA) is light-years ahead of its competition, and its data center software and hardware are mission critical. This is why its data center revenue rose 171% year over year last quarter to $10.32 billion.

CrowdStrike is firing on all cylinders

CrowdStrike provides comprehensive security with its Falcon platform. The advantages are several: Falcon is cloud-native (no on-premises hardware required), customizable, and uses AI to analyze data and provide real-time protection. The platform is modular, so customers can choose which modules they want or need. This plays into CrowdStrike’s land-and-expand strategy: It gains a customer, proves the platform’s worth, and then the customer adds more modules — creating more revenue. This shows up in the company’s dollar-based net retention rate (DBNR), which has been above 120% dating back to the first quarter of fiscal 2019. DBNR measures the year-over-year increase in sales from an average customer. Above 100% is good, and above 120% is excellent. You can probably guess how the chart of annual recurring revenue (ARR) growth looks: The meteoric rise to $2.9 billion in ARR has enabled CrowdStrike to generate $416 million in free cash flow through the second quarter of fiscal 2024 and stack up $3.2 billion in cash against $742 million in long-term debt. Having cash on hand to fund growth is crucial in this environment, and the company likely won’t have to borrow money at unfavorable interest rates.
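
As a toy illustration of the arithmetic behind that metric, here is a short sketch; the cohort revenue figures are invented for the example, not CrowdStrike’s actual numbers.

```python
# Toy illustration of dollar-based net retention (DBNR). The cohort
# ARR figures are invented for the example, not CrowdStrike's.
cohort_arr_last_year = 1_000_000  # ARR from one customer cohort a year ago
cohort_arr_now = 1_250_000        # same cohort's ARR today (expansion minus churn)

dbnr = cohort_arr_now / cohort_arr_last_year * 100
print(f"DBNR: {dbnr:.0f}%")  # 125%: the average customer spends 25% more than a year ago
```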

Read more
The Biggest Questions: What is death?

MIT Technology Review by Rachel Nuwer — Just as birth certificates note the time we enter the world, death certificates mark the moment we exit it. This practice reflects traditional notions about life and death as binaries. We are here until, suddenly, like a light switched off, we are gone. But while this idea of death is pervasive, evidence is building that it is an outdated social construct, not really grounded in biology. Dying is in fact a process—one with no clear point demarcating the threshold across which someone cannot come back. Scientists and many doctors have already embraced this more nuanced understanding of death. As society catches up, the implications for the living could be profound. “There is potential for many people to be revived again,” says Sam Parnia, director of critical care and resuscitation research at NYU Langone Health.

Neuroscientists, for example, are learning that the brain can survive surprising levels of oxygen deprivation. This means the window of time that doctors have to reverse the death process could someday be extended. Other organs likewise seem to be recoverable for much longer than is reflected in current medical practice, opening up possibilities for expanding the availability of organ donations. To do so, though, we need to reconsider how we conceive of and approach life and death. Rather than thinking of death as an event from which one cannot recover, Parnia says, we should instead view it as a transient process of oxygen deprivation that has the potential to become irreversible if enough time passes or medical interventions fail. If we adopt this mindset about death, Parnia says, “then suddenly, everyone will say, ‘Let’s treat it.’”

Read more
Sam Altman’s return to OpenAI highlights urgent need for trust and diversity

by Matt Marshall @mmarshall — VentureBeat — OpenAI’s announcement last night apparently resolved the saga that has beset it for the last five days: It is bringing back Sam Altman as CEO, and it has agreed on three initial board members, with more to come. However, as more details emerge from sources about what set off the chaos at the company in the first place, it’s clear the company needs to shore up a trust issue that may bedevil Altman as a result of his recent actions at the company.

It’s also not clear how it intends to clean up remaining thorny governance issues, including its board structure and mandate, which have become confusing and even contradictory. For enterprise decision makers watching this saga and wondering what it all means for them, and for the credibility of OpenAI going forward, it’s worth looking at the details of how we got here. After doing so, here’s where I’ve come out: The outcome, at least as it looks right now, heralds OpenAI’s continued shift toward a more aggressive stance as a product-oriented business. I predict that OpenAI’s position as a serious contender in providing full-service AI products for enterprises, a role that demands trust and optimal safety, may diminish. However, its language models, specifically ChatGPT and GPT-4, will likely remain highly popular among developers and continue to be used as APIs in a wide range of AI products.

More on that in a second, but first a look at the trust factor that hangs over the company and how it needs to be dealt with. The good news is that the company has made strong headway by appointing some very credible initial board members, Bret Taylor and Lawrence Summers, and by putting some strong guardrails in place. According to the New York Times, the outgoing board insisted on an investigation into Altman’s leadership, blocked the return of Altman and his co-founder Greg Brockman to the board, and insisted that new board members be strong enough to stand up to Altman.

Read more