
by time.com — Nik Popli — Ammaar Reshi was playing around with ChatGPT, an AI-powered chatbot from OpenAI, when he started thinking about the ways artificial intelligence could be used to make a simple children’s book to give to his friends. Just a couple of days later, he published a 12-page picture book, printed it, and started selling it on Amazon without ever picking up a pen or paper.

Reshi, a product design manager from the San Francisco Bay Area, gathered illustrations from Midjourney, a text-to-image AI tool that launched this summer, and took story elements from a conversation he had with the AI-powered ChatGPT about a young girl named Alice. “Anyone can use these tools,” Reshi tells TIME. “It’s easily and readily accessible, and it’s not hard to use either.” The feat, which Reshi publicized in a viral Twitter thread, is a testament to the incredible advances in AI-powered tools like ChatGPT—which took the internet by storm two weeks ago with its uncanny ability to mimic human thought and writing. But the book, Alice and Sparkle, also renewed a fierce debate about the ethics of AI-generated art. Many argued that the technology preys on artists and other creatives—using their hard work as source material, while raising the specter of replacing them.

His experiment creating an AI-generated book in just one weekend shows that artificial intelligence might be able to accomplish tasks faster and more efficiently than any human can—sort of. The book was far from perfect. The AI-generated illustrations had a number of issues: some fingers looked like claws, objects were floating, and the shadowing was off in some areas. Normally, illustrations in children’s books go through several rounds of revisions—but that’s not always possible with AI-generated artwork on Midjourney, where users type a series of words and the bot spits back an image seconds later.

Alice and Sparkle follows a young girl who builds her own artificial intelligence robot that becomes self-aware and capable of making its own decisions. Reshi has sold about 70 copies through Amazon since Dec. 4, earning royalties of less than $200. He plans to donate additional copies to his local library.

Reshi’s quixotic project drew praise from many users for its ingenuity. But many artists strongly criticized both his process and the product. To his critics, the speed and ease with which Reshi created Alice and Sparkle exemplify the ethical concerns around AI-generated art. Artificial intelligence systems like Midjourney are trained on datasets of millions of images scraped from across the Internet; their algorithms learn to recognize patterns in those images and then generate new ones. That means any artist who uploads their work online could be feeding the algorithm without their consent. Many claim this amounts to a high-tech form of plagiarism that could seriously harm human artists in the near future. Reshi’s original tweet promoting his book received more than 6 million impressions and 1,300 replies, many of which came from book illustrators arguing that artists should be paid or credited if their work is used by AI.

by venturebeat.com — Ben Dickson — For decades, we have personified our devices and applications with verbs such as “thinks,” “knows” and “believes.” And in most cases, such anthropomorphic descriptions are harmless. But we’re entering an era in which we must be careful about how we talk about software, artificial intelligence (AI) and, especially, large language models (LLMs), which have become impressively advanced at mimicking human behavior while remaining fundamentally different from the human mind. It is a serious mistake to unreflectively apply to artificial intelligence systems the same intuitions that we deploy in our dealings with each other, warns Murray Shanahan, professor of Cognitive Robotics at Imperial College London and a research scientist at DeepMind, in a new paper titled “Talking About Large Language Models.” And to make the best use of the remarkable capabilities AI systems possess, we must be conscious of how they work and avoid imputing to them capacities they lack.

Humans vs. LLMs

“It’s astonishing how human-like LLM-based systems can be, and they are getting better fast. After interacting with them for a while, it’s all too easy to start thinking of them as entities with minds like our own,” Shanahan told VentureBeat. “But they are really rather an alien form of intelligence, and we don’t fully understand them yet. So we need to be circumspect when incorporating them into human affairs.”

Human language use is an aspect of collective behavior. We acquire language through our interactions with our community and the world we share with them. “As an infant, your parents and carers offered a running commentary in natural language while pointing at things, putting things in your hands or taking them away, moving things within your field of view, playing with things together, and so on,” Shanahan said. “LLMs are trained in a very different way, without ever inhabiting our world.”

LLMs are mathematical models that represent the statistical distribution of tokens in a corpus of human-generated text (tokens can be words, parts of words, characters, or punctuation marks). They generate text in response to a prompt or question, but not in the same way a human would. Shanahan simplifies the interaction with an LLM as follows: “Here’s a fragment of text. Tell me how this fragment might go on. According to your model of the statistics of human language, what words are likely to come next?”
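To make that description concrete, here is a deliberately tiny sketch of what it means to model “the statistics of human language”: a bigram counter over an invented toy corpus that continues a fragment by sampling statistically likely next tokens. It bears no resemblance to a real transformer-based LLM; the corpus, variable names, and function name are made up for illustration.

```python
# Toy illustration of next-token prediction from corpus statistics.
# Real LLMs learn far richer statistics over billions of parameters;
# this is only a sketch of the underlying idea.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat saw the dog . the dog sat on the rug ."
tokens = corpus.split()

# Count which token tends to follow which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def continue_fragment(fragment, length=5):
    """Answer 'how might this fragment go on?' by sampling likely next tokens."""
    out = fragment.split()
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_fragment("the cat"))  # e.g. "the cat sat on the mat"
```

A real LLM performs the same kind of next-token prediction, but over a vast vocabulary of tokens and statistics learned from enormous amounts of text, which is what makes its output so much more convincing than this toy.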

When trained on a large-enough corpus of examples, the LLM can produce correct answers at an impressive rate. Nonetheless, the difference between humans and LLMs is extremely important. For humans, different excerpts of language can have different relations with truth. We can tell the difference between fact and fiction, such as Neil Armstrong’s trip to the moon and Frodo Baggins’s return to the Shire. For an LLM that generates statistically likely sequences of words, these distinctions are invisible. “This is one reason why it’s a good idea for users to repeatedly remind themselves of what LLMs really do,” Shanahan writes. And this reminder can help developers avoid the “misleading use of philosophically fraught words to describe the capabilities of LLMs, words such as ‘belief,’ ‘knowledge,’ ‘understanding,’ ‘self,’ or even ‘consciousness.’”

The blurring barriers

When we’re talking about phones, calculators, cars, etc., there is usually no harm in using anthropomorphic language (e.g., “My watch doesn’t realize we’re on daylight saving time”). We know that these wordings are convenient shorthands for complex processes. However, Shanahan warns, in the case of LLMs, “such is their power, things can get a little blurry.” For example, there is a large body of research on prompt engineering tricks that can improve the performance of LLMs on complicated tasks. Sometimes, adding a simple sentence to the prompt, such as “Let’s think step by step,” can improve the LLM’s ability to complete reasoning and planning tasks. Such results can amplify “the temptation to see [LLMs] as having human-like characteristics,” Shanahan warns.
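As a concrete illustration of that trick, the sketch below simply appends the cue to whatever question is being asked. The helper names are invented for the example, and `call_llm` is a hypothetical placeholder rather than a real library function; how the prompt reaches an actual model depends entirely on the API you use.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: wire this to whatever LLM API or local model you use.
    raise NotImplementedError("replace with a real LLM call")

def answer_with_reasoning(question: str) -> str:
    # Appending the cue often nudges the model to spell out intermediate
    # steps before committing to a final answer.
    prompt = f"{question}\nLet's think step by step."
    return call_llm(prompt)

# Example usage (once call_llm is wired up):
# print(answer_with_reasoning(
#     "A juggler has 16 balls. Half are golf balls, and half of the golf balls are blue. "
#     "How many blue golf balls are there?"))
```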

But again, we should keep in mind the differences between reasoning in humans and meta-reasoning in LLMs. For example, if we ask a friend, “What country is to the south of Rwanda?” and they respond, “I think it’s Burundi,” we know that they understand our intent, our background knowledge, and our interests. At the same time, they know that we have the capacity and means to verify their answer, such as by looking at a map, googling the term, or asking other people.

However, when you ask an LLM the same question, that rich context is missing. In many cases, some context is provided in the background by adding bits to the prompt, such as framing it in a script-like framework that the AI has been exposed to during training. This makes it more likely for the LLM to generate the correct answer. But the AI doesn’t “know” about Rwanda, Burundi, or their relation to each other. “Knowing that the word ‘Burundi’ is likely to succeed the words ‘The country to the south of Rwanda is’ is not the same as knowing that Burundi is to the south of Rwanda,” Shanahan writes.
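The “script-like framework” idea can be illustrated by posing the same question bare and then embedded in a quiz-style frame. The wording below is invented for the example; the point is only that the framed version makes the correct continuation (“Burundi”) more statistically likely, not that the model thereby acquires any knowledge of geography.

```python
# Bare question: the model has only the statistics of this short prompt to go on.
bare_prompt = "What country is to the south of Rwanda?"

# The same question framed as a quiz script resembling text the model has
# likely seen many times in training, which shifts the probable continuation.
framed_prompt = (
    "The following is a geography quiz with accurate answers.\n"
    "Q: What country is to the north of Rwanda?\n"
    "A: Uganda\n"
    "Q: What country is to the south of Rwanda?\n"
    "A:"
)
```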

As LLMs continue to make progress, we as developers should be careful about how we build applications on top of them, and as users, we should be careful about how we think about our interactions with them. How we frame our thinking about LLMs and AI in general can have a great impact on the safety and robustness of their applications. The expansion of LLMs might require a shift in the way we use familiar psychological terms like “believes” and “thinks,” or perhaps the introduction of new words, Shanahan said. “It may require an extensive period of interacting with, of living with, these new kinds of artifacts before we learn how best to talk about them,” Shanahan writes. “Meanwhile, we should try to resist the siren call of anthropomorphism.”