“The future ain’t what it used to be.”

-Yogi Berra

  • Yeah, I think it would be clearer if you kept the modes distinct.

    I like the focus on accuracy and its citations. I’ve tried deep research (in contrast with ChatGPT) a few times and its generations have been basically worthless.

    I definitely have a use for something like this, but as with most of the issues I have with the applied use of these products, it comes down to a few consistent problems: guardrails, a kind of cautious insistence on singular approaches, and a lack of agency.

    For example, I would love to be able to drop it a GitHub link and have it dig through the repo and write me an ipynb demonstrating the repo’s capabilities. Or I could give it a script, sloppy with a bunch of my own garbage in it, and it cleans it up and makes it nice. Deep research is nowhere near capable of this, and I attribute that to an overly cautious development approach on the part of OpenAI. Also, because of structural limits, these models lack the kind of nested or branched thinking that would be required to hold onto big-picture goals and concepts.

    I do, however, think we’ll see things change with the new GPTs coming out, which are much cheaper to run for inference. Basically, to do the kind of work deep research claims to be doing, we need a more complex internal model structure, with many GPTs running in both series and parallel, perhaps in more of a graph structure (there’s a rough sketch of what I mean at the end of this comment).

    I also don’t think it will be OpenAI that does this. They’ve been too cautious in their development approach.

    At the end of the day, I want what deep research claims to be, but it’s clearly not there yet.
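
    To make the “in series and parallel” idea a bit more concrete, here’s a rough Python sketch of a small graph of model calls, where independent nodes run in parallel and downstream nodes consume their parents’ outputs. Everything here is hypothetical: `call_model` is a stand-in for whatever completion API you’d actually use, and the node names and prompts are just illustrative, not how deep research really works.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def call_model(prompt: str) -> str:
        # Placeholder: swap in whatever chat-completion call you actually use.
        return f"<answer to: {prompt[:40]}...>"

    # Each node: (prompt template, parent nodes whose outputs it consumes).
    GRAPH = {
        "plan":      ("Break this task into subtasks: {task}", []),
        "research":  ("Gather background relevant to this plan: {plan}", ["plan"]),
        "code":      ("Draft code following this plan: {plan}", ["plan"]),
        "synthesis": ("Combine into one answer:\n{research}\n{code}", ["research", "code"]),
    }

    def run_graph(task: str) -> dict:
        results = {"task": task}
        done = set()
        while len(done) < len(GRAPH):
            # Any node whose parents are all finished can run now; siblings run in parallel.
            ready = [name for name, (_, parents) in GRAPH.items()
                     if name not in done and all(p in done for p in parents)]
            with ThreadPoolExecutor() as pool:
                outputs = list(pool.map(
                    lambda name: call_model(GRAPH[name][0].format(**results)), ready))
            for name, out in zip(ready, outputs):
                results[name] = out
                done.add(name)
        return results

    print(run_graph("summarize a repo and demo it in a notebook")["synthesis"])
    ```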