
    December 30, 2023
    Author Steven Johnson helped Google create an app that can analyze a writer’s research material and help them extract and explore the key themes. Maybe too well.

    Steven Johnson has written 13 books, on topics ranging from a London cholera outbreak to the value of video games. He’s been a television presenter and a podcast host. He’s a keynote speaker who doesn’t have to call himself that in his LinkedIn profile. And for over a year now, he’s been a full-time employee of Google, a status that’s clear when he badges me into the search giant’s Chelsea offices in New York to show me what his team has been creating.

    It’s called NotebookLM, and the easiest way to think of it is as an AI collaborator with access to all your materials that sits on your metaphorical shoulder to guide you through your project. NotebookLM was soft-launched to a select group earlier this year but is now available to all as an “experiment”—that’s Google’s low-risk way to see how the app behaves and how we behave with the app.

    Johnson found his way to Google by way of a lifelong obsession with software as “a dynamic thought partner,” a tool to speed up and enhance the creative process. When he was in college he became obsessed with HyperCard, Apple’s software that broke knowledge into chunks and allowed you to navigate an information-space through links. It anticipated web navigation before the web existed. “I fought mightily to turn HyperCard into that dream tool, but it wasn’t quite ready,” he says. He eventually became an enthusiast of Scrivener, a combination word processor and project organizer popular with book authors. (I am a fan too.)

    When Johnson got access to OpenAI’s GPT-3 text generator in 2021, he recognized that AI could level up a new generation of thought tools. Oh, wait, he said to himself, this thing that has always been in the back of my mind is now going to be possible. Scenarios unthinkable even a year before were suddenly on the table. Johnson didn’t yet know that Google not only had similar large language models, but was already working on a project very much in line with his thinking. In May 2022, a small team in the experimental Google Labs division cold-emailed Johnson. They set up a meeting via Starline, a Google Labs project that makes remote meetings feel eerily intimate and in-person. “I basically had a conversation with a hologram who said, ‘You know, this thing you’ve been chasing your whole life? We can finally build it,’” Johnson says. He became a part-time adviser to the small team, at first sharing the workflow of a professional writer. “Here’s four or five engineers, here’s an actual author, let’s just watch him,” is how Google Labs head Josh Woodward sums up the process. Eventually Johnson got involved in the development of the product itself and was sucked in to the point of accepting a full-time gig. His title at Google Labs is editorial director.

    NotebookLM, originally called Project Tailwind, starts by creating a data set of your source material, which you drag into the tool from Google Docs or the clipboard. After the app has digested it all, you can then ask NotebookLM questions about your material, thanks to Google’s large language model technology—partly powered by its just-released upgrade Gemini. The answers reflect not only what’s in your source material but also the wider general understanding of the world that Gemini has. A critical feature is that every answer to your queries comes with a set of citations reporting where exactly the information came from, so users can check the accuracy of its output.
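    Google hasn’t published how NotebookLM works under the hood, but the workflow described above (ingest your sources, pull the passages relevant to each question, and return an answer annotated with citations) matches the general retrieval-augmented pattern. As a rough sketch only, assuming a naive chunking scheme and keyword-overlap scoring in place of real retrieval and a real language model, a minimal version of that loop might look like this:

```python
# Hypothetical sketch of a retrieval-with-citations loop; not NotebookLM's actual code.
from dataclasses import dataclass


@dataclass
class Chunk:
    source: str  # e.g. the Google Doc or pasted note the passage came from
    text: str


def chunk_sources(sources: dict[str, str], size: int = 40) -> list[Chunk]:
    """Split each source into ~size-word chunks so answers can cite specific passages."""
    chunks = []
    for name, text in sources.items():
        words = text.split()
        for i in range(0, len(words), size):
            chunks.append(Chunk(name, " ".join(words[i:i + size])))
    return chunks


def retrieve(question: str, chunks: list[Chunk], k: int = 3) -> list[Chunk]:
    """Rank chunks by naive keyword overlap with the question (a stand-in for real retrieval)."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.text.lower().split())),
                    reverse=True)
    return scored[:k]


def answer(question: str, chunks: list[Chunk]) -> str:
    """Compose a grounded 'answer': here, just the top passages with their citations.
    A real system would hand these passages to a language model and keep the citations."""
    hits = retrieve(question, chunks)
    lines = [f"- {c.text} [source: {c.source}]" for c in hits]
    return f"Q: {question}\n" + "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical source names and contents, for illustration only.
    sources = {
        "interview-notes": "Steven Johnson is editorial director at Google Labs and works on NotebookLM.",
        "blog-post": "NotebookLM answers questions about your documents and cites where each answer came from.",
    }
    print(answer("What is Steven Johnson's title?", chunk_sources(sources)))
```

    In NotebookLM itself, the retrieved passages would be handed to the language model to compose the prose answer, with the citations carried along so you can check each claim against your own material.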

    Google is not the only company envisioning products that let people create custom data sets to explore with LLMs. At OpenAI’s developer day last month, the company introduced personalized mini-GPTs that can be tuned to a specific task. Woodward acknowledges a “core similarity.” But he argues that NotebookLM focuses more on enhancing a workflow, and is geared to provide superior accuracy in its outputs. Also, he says that the OpenAI products have more of a personality, while NotebookLM is designed to have no such pretensions.

    I’ve been playing with NotebookLM for a few weeks. The most annoying part of the writing process for me has always been constantly having to leave my manuscript to find the exact information I need in a transcript or document that I want to refer to or quote from. In writing this essay, when I wanted to remind myself of Johnson’s official title, NotebookLM instantly supplied the answer when I requested it. But that’s only one of its more prosaic uses. Deeper functions come in the form of analysis it can provide about your source material—not just the facts but the overall picture they paint. Right after you enter the sources, NotebookLM seems to arrive at its own opinions of what’s important about the topic and can suggest questions for you to ask it and themes to explore. And although Woodward says NotebookLM doesn’t have a personality, it sure likes to talk. Even asking a simple question like Johnson’s title resulted in a list of four bullet points.

    Because my sources were three Googlers and a company blog post, NotebookLM’s outputs not surprisingly reflected what Google wants the world to think of NotebookLM. When asking questions about this source material, I constantly had to remind the app that I wasn’t writing this from Google’s point of view. When I asked NotebookLM to describe itself in the simplest possible way, in hopes it might help me phrase a brief description in the first paragraph of this essay, it responded with its beloved bullet points. I asked it to narrow this down to a single sentence that didn’t read like a PR description. Here’s what it came up with: “NotebookLM is an experimental AI-powered note taking tool that helps you learn faster by reading and understanding your documents, generating summaries, answering your questions, and even helping you brainstorm new ideas.” It’s an impressive summary that came backed by 10 citations, but it did not reflect the most important point of view—mine. That’s appropriate, because it’s up to me to provide that. I’m also glad that NotebookLM didn’t try to impress with a (pathetic) attempt at stylish language, because that’s my job too.

    But here’s my worry. Users of NotebookLM who simply want to get a good job done quickly might not take the time to do that hard work of thinking. They might not even bother to pore through the research materials themselves. Why take the time when your AI buddy has gone through the material much more closely than you and has already reached some nifty conclusions about it? Johnson doesn’t seem as worried about this as I am. First of all, he notes that users are under no obligation to engage in conceptual discourse with the app: They can happily use it for mundane tasks, like finding the passage where someone’s title is identified. But he clearly feels it’s a tremendous advantage to engage in such dialog. He’s thrilled that NotebookLM offers suggestions for themes to pursue. And you can even use a mode where NotebookLM can critique your work and argue the opposite side. “If I’m genuinely interested in getting to a unique take, NotebookLM should be able to help me get to that with less hassle,” he says, “and maybe even get to a more interesting take.” Which makes me wonder: Whose take would that now be?

    As we use more AI tools, more heavily, this question is critical. After spending the entire year of 2023 writing and thinking about AI, I can now summarize my key concern more succinctly than NotebookLM might: our future will be characterized by a tension between copilot (AI as collaborator) and autopilot (humans as sidekick to AI). The latter is more efficient and cheaper in a narrow labor economics sense but troublesome in all sorts of ways.

    Pointing this out is in no way a dunk on NotebookLM. I’m just exercising my punditry in a way that our current AI models can’t match (at least for now). Meanwhile, working with his Google Labs team, Steven Johnson has accomplished the gold standard for tech products—building the tool he most passionately yearns to use himself. Now he can spend his days at Google building more—and perhaps, suffering the curse of getting what you ask for.

    Time Travel

    Steven Johnson’s passion for thought-supporting tools began with Apple’s massively influential HyperCard software. In my column in the February 1988 issue of Macworld, I meditated on HyperCard myself, trying to assess the powers of this fascinating program and the idea of navigating through a sea of the world’s information—necessarily through a pre-internet lens. Back then, I was skeptical that such a project could be funded and did not anticipate that it would be a bottom-up enterprise ultimately sped along by Google, a company whose mission was to make the world’s information universally accessible. But I was right to anticipate the copyright implications, which we’re dealing with now in the age of ChatGPT.

    There is a long line of adherents to this vision, beginning in 1945 with Vannevar Bush and continuing through Ted Nelson, who coined the word hypermedia. Apple chairman Sculley writes of his belief that HyperCard and its descendants will free us from the “constraints of a book’s linear format”: linking information “the way you think” in many cases will obviate the tiresome convention of beginning, middle, and end. Our fiction may begin to resemble novels like Hopscotch, written by the South American writer Julio Cortázar, who claimed the 155 chapters of his book could be read in any of several different sequences. In the hypermedia world, nonfiction books would not be read front to back, but would be blended into some World Information Bank, each passage linked in millions of ways to other relevant information. To quote Sculley, using this model “enables the user to summon up any information he needs, in the dosage he requires.”

    This strikes me as an unlikely scenario, at least on the scale that some commentators have predicted. An enormous task stands in the way of realizing the hypermedia dream: all the world’s knowledge must be entered as data and put online.

    The problems of copyright and fair use must also be dealt with, and that means a near-infinite number of lawyer-hours. In a world where too many people are unfed and homeless, our space program is dead in the water, corporations are lean and mean, and every spare penny goes for tools of destruction, it is difficult to imagine this multibillion-dollar project ever getting underway.
