Prompt:
This is a post on Hacker News "Show HN: Chrome extension to summarize blogs and articles using ChatGPT"
Write some examples of comments
Comment 1: "This looks really interesting! I'm always looking for ways to save time and quickly get the main points of an article. I'll definitely give it a try. Thanks for sharing!"
Comment 2: "I'm not sure about using a GPT model for summarization. The quality of the summary might not be very good, and it could potentially be biased or misleading. I think it's better to use a more specialized tool for this task."
Comment 3: "I tried this out and it's really impressive! The summaries it produces are concise and accurate. Plus, it's much faster than reading the whole article. I'm definitely going to keep using this."
Comment 4: "I think this is a great idea and a really useful tool. It's great for people who are short on time but still want to stay informed. Thanks for creating it!"
Comment 5: "I'm not sure if this is the best approach to summarization. I think using a more sophisticated algorithm, like a deep learning model, would produce better results. But overall, it's still a neat concept."
Sorry if this is off-topic, but ChatGPT is blowing my mind, I'm using it to write my Christmas cards this year and it's already made some funny ones.
Dear <Manager>
Wishing you a very Merry Christmas and a Happy New Year! May your days be filled with joy, laughter, and lots of eggnog. Speaking of eggnog, have you heard the one about the manager who tried to manage a team of developers? He kept telling them to "commit" to their work, but they just kept "pushing" him aside.
I've asked it to clean up my code including Makefiles and stuff. A lot of it is way cleaner and higher quality. Maybe that says more about me than ChatGPT.
It's not at all uncommon for it to admit ignorance, though it is less entertaining (and less likely to get posted publicly) than when it hallucinates something.
It's very inconsistent at admitting ignorance, though.
How many galaxies are in the Virgo Cluster?
> The Virgo Cluster is a cluster of galaxies that contains hundreds of individual galaxies. It is one of the largest galaxy clusters in the local universe, and it is located in the constellation of Virgo. The exact number of galaxies in the Virgo Cluster is not known, as it is constantly changing due to the motion of the galaxies within the cluster. However, it is estimated that the cluster contains at least 1,300 galaxies.
How many galaxies are in the Virgo Supercluster?
> It is estimated that the Virgo Supercluster contains approximately 100,000 galaxies.
When I ran the second query an hour ago, it replied that it didn't have access to the internet to look up the information.
If it works well, it could be actually very useful for longer pieces.
I have read so many books where a few genuinely good ideas are stretched over 200+ pages (just to justify the cost of printing, satisfy industry standards, or whatever).
A half-decent summary of all those would be of real value. Get 80% of the value in 20% (or less) of the time.
I think the problem is - there are examples and anecdotes and whatever scattered throughout the book that make those ideas connect for you.
And this is different for everyone.
Maybe an ML model you train yourself on your highlights would be able to find the stuff that will connect for you - but I'm skeptical enough people read & highlight enough to train models to do this (or that it would even work).
Yes there are a number of services, and I would happily pay for them.
But either they have a very small catalog, or their summaries are too short, or both.
All copyrighted material, and I don't have links to the actual works, but some books come to mind that I would have enjoyed much more if they were 20% of the length:
Not that they'd be the particular books OP wants, but if you're looking to summarize large content, perhaps grab it from Project Gutenberg? https://www.gutenberg.org/
I was playing around [0] with GPT, and most of what I started was written by ChatGPT, though with many fixes. The code it generated looked mostly OK, but it needed many fixes since it was off a lot, especially on API use, promises, etc. Because this is all throw-away by definition (and only for localhost use!) and only to see/test/play with the differences from production stuff, it's pretty impressive how fast you can do things.
I have the feeling, though, that Copilot makes fewer mistakes and learns my style better; ChatGPT keeps mixing styles even in the same session. You can prime the prompt and then it works a bit better in that case, I found.
Great job, I was doing it manually before! It would be nice if it were possible to continue chatting with ChatGPT after the summary. I always find it interesting to probe ChatGPT about the article after summarizing it.
anyone else remember Copernic Summarizer? I miss that. When are we getting a self-hosted "GPT-Alike"? Is it something that "federated search engine" project from a few years ago could aid with training?
well, the extension technically 'worked' -- that is, it gave me some text that was allegedly a summary of the article -- it just wasn't a good summary.
ChatGPT's context window is 8192 tokens. A token is about 3-4 characters. OpenAI has an open-source tokenizer you can download, too, if you want the exact number of tokens in a body of text.
Good question. I tested it manually with a few articles I could find. If you find a web page too large for ChatGPT then let me know, I can split it into multiple batches and ask ChatGPT to summarize once I'm done.
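The splitting idea is simple enough to sketch. This is a hypothetical illustration, not the extension's actual code: it batches an article on paragraph boundaries using a rough chars-per-token heuristic, where a real implementation would count tokens with OpenAI's tokenizer instead.

```javascript
// Rough heuristic: English text averages ~4 characters per token.
const CHARS_PER_TOKEN = 4;

function chunkText(text, maxTokens) {
  const maxChars = maxTokens * CHARS_PER_TOKEN;
  const chunks = [];
  let current = "";
  // Split on paragraph boundaries so each batch stays coherent.
  for (const para of text.split(/\n\n+/)) {
    if (current && current.length + para.length + 2 > maxChars) {
      chunks.push(current);
      current = "";
    }
    // Caveat: a single paragraph longer than maxChars still becomes an
    // oversized chunk; a fuller version would hard-split it by sentence.
    current = current ? current + "\n\n" + para : para;
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each chunk would then get its own "summarize this" prompt, with a final pass asking for a summary of the summaries.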
IIRC, ChatGPT is based on GPT3.5 (likely an even larger model) rather than GPT3. It's also been refined a bit using reinforcement learning.
I've noticed that when I ask ChatGPT to determine the type of a variable in a given code block, its reasoning has fewer holes than GPT3 for the same prompt. Stands to reason that other results will be similarly refined.
It also doesn't appear to have a token limit? Not sure how that feat was accomplished.
> What does 'SSE' stand for in the following code sample: <pasted fetch-sse.js>
"In this code sample, 'SSE' likely stands for "Server-Sent Events". It is the name of the fetchSSE function and it is used to fetch data from a server using the Server-Sent Events protocol. This protocol allows a server to push data to a client in real-time, rather than requiring the client to continually poll the server for updates."
Not sure how accurate this is but it gave me enough information to look into it more!
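That description checks out: the server keeps an HTTP response open and streams events as plain text, where each event is one or more "data:" lines terminated by a blank line. As a hypothetical sketch (not the real fetch-sse.js), a minimal parser for that wire format might look like:

```javascript
// Minimal Server-Sent Events parser: collects "data:" lines and
// dispatches an event at each blank line. Other fields defined by the
// spec (event:, id:, retry:) are ignored here for brevity.
function parseSSE(streamText) {
  const events = [];
  let dataLines = [];
  for (const line of streamText.split(/\r?\n/)) {
    if (line === "") {
      // Blank line ends the current event; dispatch accumulated data.
      if (dataLines.length) events.push(dataLines.join("\n"));
      dataLines = [];
    } else if (line.startsWith("data:")) {
      // Strip the field name and one optional leading space, per the spec.
      dataLines.push(line.slice(5).replace(/^ /, ""));
    }
  }
  return events;
}
```

In a browser you would normally let `EventSource` do this for you; extensions like this one parse it by hand because `fetch` streaming is needed for POST requests.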