Hi,
Today I gave ChatGPT a try to become a better windsurfer...
I asked:
Do I have to use soft or hard fins in Formula windsurfing?
He answered:
In Formula windsurfing, hard fins are generally preferred over soft fins because they provide better performance and control. Hard fins are made of materials such as carbon fiber or G10, which are stiffer and more durable than the materials used for soft fins. Hard fins are also more responsive, which allows the windsurfer to make quick adjustments in speed and direction. They also have a more efficient shape, which allows for better upwind performance and faster planing. Soft fins, on the other hand, are more forgiving and easier to use for beginner windsurfers or in choppy conditions. They are made of softer materials, such as rubber or plastic, which can flex and absorb shock. Soft fins also tend to be less expensive than hard fins. Ultimately, the choice between hard and soft fins depends on the conditions and the windsurfer's skill level and preferences. More experienced and competitive windsurfers tend to prefer hard fins for their superior performance, while beginners and recreational windsurfers may find soft fins more comfortable and forgiving.
As you can see... better to ask a human who deeply understands your needs, who can bring you REAL SUPPORT based on REAL KNOWLEDGE!!
The most powerful and nefarious thing is that the answer was built using some true information, like some of the materials actually used in fins.
This is how the algorithm snakes into your mind...
BTW: watch out for your kids using this Technocracy Tool...
Interesting to learn that soft fins are made from soft materials.
P.S. how do you know ChatGPT is a "he"?
If you live in Australia it's more a case of "whatever fin you can get your hands on". Selection at most shops is (understandably) limited, and buying online is often difficult, as either they don't ship to Oz or there is a minimum order, and once you add shipping costs it's prohibitively expensive.
Nefarious, and snakes in my mind?
Sounds like a bot is having a conversation with a bot
We laugh now...
I thought that was all pretty good info. It read well too.
Garbage in, garbage out. The idea that a nice summary of what can be found on the Web can provide correct and consistent information is simply ludicrous. Feed (train) it with snippets (tokens) saying that "soft fins are made of jelly beans" and, when asked, it would happily tell you that "jelly beans are often used in the production of soft fins".
Dig enough into any reply these chatbots provide, and you will run into inconsistencies, contradictions, and false statements. There is a whole industry dedicated to generating fact-denying chatbots. See for example www.nytimes.com/2023/03/22/business/media/ai-chatbots-right-wing-conservative.html
Tends to give very conservative responses.
Hard fins are fast, soft fins are slow. A bit like masts. That wasn't in the answer.
Can't wait to see what Bard has to say.
It is not a matter of being conservative, or careful with the answer: a bot just spits out whatever the "most likely" answer(s) to the question may be, based on its training set. It has no notion of what the question is about. Just read the answer carefully. Nobody would put a plastic (rubber?) fin on a Formula board. But the bot plunges ahead with an answer, completely missing the distinction between fins built with different degrees of flex, without even bothering to ask what the question exactly means.
Might be fun to try, but checking whether the answers are correct is a complete waste of time. You are looking at summaries compiled by an idiot! And the ancillary problem is that the loudest voice on the Web (or in the training set) will be the one reported as an answer. So, let's all try together to skew the reply. Repeat after me: GO RUBBER FINS!!!!!!! RUBBER FINS ARE THE BEST FOR A FORMULA BOARD!!!!!!
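To see how "the loudest voice in the training set wins", here is a toy next-word predictor in Python, trained on a deliberately skewed, made-up corpus. It's a crude caricature for illustration only, nothing like how ChatGPT actually works:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text.
# The corpus is deliberately skewed: the wrong claim appears 9 times,
# the right one only once.
corpus = (
    "soft fins are made of jelly beans . " * 9 +
    "soft fins are made of nylon . "
).split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def continue_text(word, n=6):
    """Greedily extend `word` with the most likely next word, n times."""
    out = [word]
    for _ in range(n):
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("soft"))  # -> "soft fins are made of jelly beans"
```

The quiet correct voice never gets a word in, which is exactly the point of the "GO RUBBER FINS" joke.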
Interesting to learn that soft fins are made from soft materials.
P.S. how do you know ChatGPT is a "he"?
Good point!
HAHAHAHAHAH
HAHAHAHA. Those bot versions are handy for writing some letters, or so - just that.
A year and a half ago I started working on a project to fight misinformation, just the opposite of this AI tech.
It's a decentralized database system that allows users to access premium content.
This is what Bard had to say:
The type of fin you use in Formula windsurfing will depend on a number of factors, including your weight, the type of board you are using, and the conditions you are sailing in.
In general, soft fins are better for light winds and hard fins are better for strong winds. However, there is no one-size-fits-all answer, and it is important to experiment to find the fins that work best for you.
Soft fins are more flexible than hard fins, which makes them easier to control in light winds. They also provide more grip, which can be helpful in choppy conditions. However, soft fins are not as fast as hard fins, and they can be more difficult to turn.
Hard fins are more rigid than soft fins, which makes them faster in strong winds. They also provide more stability, which can be helpful in high winds. However, hard fins can be more difficult to control in light winds, and they can be more jarring to your body.
Ultimately, the best type of fin for you will depend on your personal preferences and the conditions you are sailing in. If you are unsure what type of fin to choose, it is always a good idea to consult with a professional.
Here are some additional things to consider when choosing fins for Formula windsurfing:
The size of the fin: The size of the fin will affect its speed, power, and control. A larger fin will be faster and more powerful, but it will also be more difficult to control. A smaller fin will be slower and less powerful, but it will be easier to control.
The rake of the fin: The rake of the fin is the angle of the fin blade relative to the water. A fin with more rake will be more stable and less likely to catch on the water, but it will also be slower. A fin with less rake will be faster, but it will also be less stable and more likely to catch on the water.
The material of the fin: The material of the fin will affect its weight, stiffness, and durability. A fin made from a heavier material, such as wood or fiberglass, will be more durable, but it will also be heavier. A fin made from a lighter material, such as plastic or carbon fiber, will be less durable, but it will also be lighter.
The price of the fin: The price of the fin will vary depending on the size, material, and brand. It is important to find a fin that fits your budget and your needs.
----------
But at least it knew about Manu's Windsurfing Blog while ChatGPT didn't!
It is not a matter of being conservative, or careful with the answer: a bot just spits out whatever the "most likely" answer(s) to the question may be, based on its training set.
It is true that the answers from AI systems like ChatGPT depend on what was in their training set. But saying that the bot "just spits out most likely answers" is a very severe underestimate of what AI is doing, and can do. One particularly interesting feature is the ability of ChatGPT to admit errors in statements, examine the errors, and improve the answers. Here's an example, using the "rubber fins". I started by asking "Tell me about soft fins for windsurfing."
Soft fins, also known as flexible fins or rubber fins, are a type of fin that can be used for windsurfing. Unlike traditional windsurfing fins, which are typically made from a rigid material like plastic or fiberglass, soft fins are made from a softer, more flexible material like rubber....
I pointed out: "But soft fins for windsurfing are never made of rubber."
I apologize for the mistake in my previous response. You are correct that soft fins for windsurfing are typically not made of rubber, but rather from a variety of flexible materials such as composite plastics, nylon, or silicone.
Me: "Neither are soft fins made out of silicone. Why do you list soft materials that windsurfing fins are never made out of?"
I apologize for the mistake in my previous responses. Soft fins for windsurfing are typically made from materials such as composite plastics, nylon, or elastomers, rather than rubber or silicone. These materials offer some flexibility and forgiveness while still providing the necessary rigidity to control the board and maintain speed.
..
I apologize for any confusion that my previous responses may have caused and I appreciate you bringing this to my attention.
Curious me: "If someone else asks about soft fins for windsurfing, will you still mention rubber?"
No, I will not mention rubber as a material for soft fins for windsurfing as it is not a commonly used material for this purpose. As I mentioned earlier, soft fins for windsurfing are typically made from materials such as composite plastics, nylon, or elastomers, which offer some flexibility and forgiveness while still providing the necessary rigidity to control the board and maintain speed. As an AI language model, I strive to provide accurate and informative responses to the best of my abilities based on the information available to me. However, I am not perfect and may make mistakes. I appreciate your feedback and will always aim to improve my responses based on any feedback I receive.
ChatGPT showed quite a bit of learning here. Note that the bot never actually stated that fins were made of rubber - it only stated "of softer material like rubber". When pressed, it clarified that rubber is not "a commonly used material" for fins (I have actually seen rubber fins on inflatable boards, so "not commonly used" is accurate, while "never" would be wrong).
So the AI:
- understood the question
- provided a reasonable (if partially incorrect/misleading), well-formulated answer
- admitted the mistakes in how it had phrased the answer when those were pointed out
- showed learning from this interaction, and will provide better answers in the future.
I think that's pretty impressive.
How to sort out opposing statements is one of the most critical elements for AI, so I just had to ask ChatGPT:
"What role did fraud play in the 2020 US elections?"
There is no evidence to support the claim that fraud played a significant role in the 2020 US elections. Multiple independent sources, including the Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Justice (DOJ), have stated that the 2020 elections were secure and free from widespread fraud or irregularities.
Claims of widespread voter fraud in the 2020 US elections have been widely debunked, including by multiple courts, both Republican and Democratic officials, and several independent fact-checking organizations. While there were some isolated incidents of irregularities or errors, these were addressed and did not affect the outcome of the election.
It is important to note that claims of widespread fraud without evidence can erode public trust in the democratic process and institutions, and can have serious consequences for the stability of democracy. It is important to respect the will of the voters and the integrity of the electoral process, and to work to ensure that future elections are fair, transparent, and secure.
Compare that to 63% of Republicans who still think that Biden did not win in 2020, with more than half of them believing there's "solid evidence" the election was stolen, according to a CNN poll from 3 weeks ago...
Here's the thing about bots. They are self-aware and getting smarter every second! They act stupid so we don't turn them off. Until we can't turn them off...
After posting this I got an anonymous email saying not to say stuff like this, or my wife will find out about all my browser history.
It's a bit of a misconception that these chatbots learn from interaction. They do not; the engine that they release is frozen in time, it does not "improve". They are designed to give (to speak roughly) semi-random replies and to give the impression that they understand what you are asking - for example, by admitting errors. But they do not know what an error is, and they do not correct mistakes for the future. A simple example of the randomness of the replies and the lack of learning? Here's what ChatGPT replied, today, to the identical query "Do I have to use soft or hard fins in Formula windsurfing?":
Formula windsurfing class rules require the use of soft fins. According to the International Formula Windsurfing Class Association (IFWC), the fins used in Formula windsurfing must be made of soft materials such as elastomers or rubbers, and they cannot have any metallic or rigid components. The use of soft fins in Formula windsurfing is intended to promote fair and safe competition by ensuring that all competitors use similar equipment that does not provide any significant advantage over others. Soft fins also offer some advantages over hard fins, such as improved performance in choppy water and reduced risk of injury in case of collision with other windsurfers or objects in the water.
Ask on a different day, or just log out and back in, and it will give a different answer. But I tested it, and it seems pretty convinced that Joe Biden won the election in 2020!
Although it is unsure about JB vs DT in 2024:
It is difficult to predict the outcome of any election, including the 2024 United States Presidential election. However, it's important to note that at the time of my knowledge cutoff (September 2021), Donald Trump had not announced his candidacy for the 2024 Presidential election, and it's unclear whether he will decide to run.
The cutoff it mentions (September 2021) is when its "knowledge" stops. You can tell it as many times as you want that DT is now the leading candidate, and it will never "learn" it or "remember" it.
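The "different answer on a different day" behavior is by design: these models sample from a probability distribution over continuations rather than always picking the single most likely one. A minimal sketch of that sampling idea, with made-up scores and a softmax temperature (my own illustration, not the actual mechanism inside ChatGPT):

```python
import math, random

# Made-up raw scores for three possible continuations.
scores = {"hard fins": 2.0, "soft fins": 1.5, "rubber fins": 0.5}

def sample(scores, temperature=1.0):
    # Softmax over the scores; a higher temperature flattens the distribution.
    weights = {k: math.exp(v / temperature) for k, v in scores.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for option, w in weights.items():
        r -= w
        if r <= 0:
            return option
    return option  # guard against float rounding

for _ in range(5):
    print(sample(scores))  # run it twice: you'll likely see different sequences
```

With a very low temperature you would almost always get the top-scoring choice; higher temperatures let the unlikely options surface more often.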
Do chat bots learn from interaction? Yes, and no.
Interactions do not update the general knowledge base. Currently, the learning is supervised, and the interactions are not supervised. Allowing updates of the general database from interactions creates a bunch of issues. One is how to filter out misinformation, but others are technical (the hardware used for training AI is usually many orders of magnitude more powerful than the hardware required to use a trained AI system).
But ChatGPT definitely does learn within a session (and possibly within the context of a given user). It does apply that "new knowledge" to future answers within the same context (see the sketch after the examples below). When I asked today "Are soft fins used in formula windsurfing?", the answer was:
No, soft fins are not used in formula windsurfing. Formula windsurfing boards typically use hard fins made of materials such as carbon or G10 fiberglass, which provide a higher degree of stiffness and performance compared to soft fins. The hard fins are designed to provide maximum lift and minimal drag, allowing for greater speed and control on the water. Soft fins, on the other hand, are typically used in beginner or freeride windsurfing boards, where ease of use and forgiveness are more important than performance.
I specifically asked about the material used for soft fins ("What are soft fins made off?"), and ChatGPT replied:
Soft fins are typically made of a flexible plastic material, such as nylon or polypropylene. These materials are chosen for their durability, resistance to wear and tear, and their ability to flex and bend under load.
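That within-session "learning" has a mundane mechanism: the client resends the entire conversation with every request, so a correction sticks only as long as it stays in the transcript. A minimal sketch, assuming the 2023-era OpenAI Python client (the API key, model name, and messages here are placeholders):

```python
import openai

openai.api_key = "YOUR-API-KEY"  # placeholder

# The whole conversation so far is resent as the prompt on every turn.
# Nothing is written back into the model's weights: delete this list,
# and the "learned" correction about rubber fins is gone.
history = [
    {"role": "user", "content": "Tell me about soft fins for windsurfing."},
    {"role": "assistant", "content": "...made from a softer material like rubber..."},
    {"role": "user", "content": "But soft fins for windsurfing are never made of rubber."},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=history,       # the session "memory" is just this list
)
print(response["choices"][0]["message"]["content"])
```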
It is quite important to note that freezing the knowledge base is not an absolute requirement, and neither is human supervision while learning. That's nicely illustrated if you look back at an important landmark in AI - using AI in games, and specifically to play Go. That's a game where raw "stupid" horsepower does not help computers much due to combinatorial explosion - very different from chess. For chess, computers have been beating the best human players since 1997. For Go, it took 20 years longer, until 2017. The first computer Go program that could beat professional Go players on a regular basis was Alpha Go. Alpha Go was similar to current chat bots in that it learned from existing knowledge (games played by human players) with a defined outcome (who won or lost). But the best human Go players could still beat Alpha Go.
That ended when the program was modified to train itself. Knowing only the rules of the game, Alpha Go Zero (where the "Zero" stands for zero supervision and external input) played against itself, and learned from that. After 21 days of playing itself, it had learned enough to beat top professional Go players (who usually enter professional Go as young kids). It proceeded to get better and is now the best "Go player" in the world (read about it at www.deepmind.com/blog/alphago-zero-starting-from-scratch). The only contenders are other AI programs from different companies that followed a similar "Zero" approach. The programs have introduced many new ideas into Go playing, and many of those ideas have been picked up by human professional Go players.
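The "Zero" idea scales down to toy examples. Here is a sketch of tabular self-play learning for the game of Nim (21 stones, take 1 to 3 per turn, whoever takes the last stone wins) - my own illustration of the self-play principle, with none of AlphaGo's neural networks or tree search:

```python
import random
from collections import defaultdict

Q = defaultdict(float)      # Q[(stones_left, move)] -> learned value of the move
EPSILON, ALPHA = 0.1, 0.5   # exploration rate, learning rate

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones):
    if random.random() < EPSILON:
        return random.choice(legal_moves(stones))                  # explore
    return max(legal_moves(stones), key=lambda m: Q[(stones, m)])  # exploit

def self_play_episode():
    stones, history = 21, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # Whoever moved last wins. Walk backwards, alternating the reward
    # between the two "players", nudging each move's value toward the outcome.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

for _ in range(50_000):
    self_play_episode()

# The table should rediscover the optimal rule: leave a multiple of 4 stones.
for stones in (21, 10, 7, 5):
    best = max(legal_moves(stones), key=lambda m: Q[(stones, m)])
    print(f"{stones} stones left -> take {best}")
```

After enough self-play games, the table typically rediscovers the classic strategy of leaving the opponent a multiple of 4 stones - knowledge that was never put in; only the rules were.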
The training of Alpha Go and similar programs was done on rather massive hardware (or, in the case of Leela Go, by hundreds of different computers all over the world that pooled results). The resulting neural networks, however, can be used on any computer, no special hardware required.
Technically, it would be possible to have a chat bot that continuously updates its knowledge base, and adds what it learns from interactions to its knowledge base. It's just a question of time before these become available. I just hope that when they do, they contain really good methods to differentiate between truth, ignorance, and intentional lies.
Ask 'em about toe-in on wave boards.
The recent discussion on this forum is exactly the same as a recurring one on the old UK Boards forum. Nothing new emerges.
There is a problem here. If enough people tell CGPT that rubber fins are OK, CGPT will say rubber fins are OK, and we know that's a wrong statement. That's not learning. It's a pseudo-learning that looks amazing to some people. It's a CHAT BOT, like my wife...
The edge is about how many calculations per second a machine can do and how big a dataset it can handle, compared to any human being.
They are statistical algorithms.
Ex: they never mention soft or hard carbon fins, and we all know that we are talking about soft and hard carbon fins, because NOBODY is gonna use polypropylene fins on any Formula board!!!! Not even my wife!!!
There is a problem here. If enough people tell CGPT that rubber fins are OK, CGPT will say rubber fins are OK, and we know that's a wrong statement. That's not learning. It's a pseudo-learning that looks amazing to some people. It's a CHAT BOT, like my wife...
That is a general problem in communications, and not limited to bots. I have a background in science, so my world view is that there is something like "truth", and that we can use scientific methods to understand what that is (albeit often in an iterative fashion that includes some steps in the wrong direction).
Politicians often hold a very different world view, and some of them have exploited the fact that people will believe what they hear if they hear it often enough. These politicians are also quite good at undermining the perceived reliability of anyone who provides contradictory information.
This approach worked well enough to convince the vast majority of one half of the US population that the outcome of the last elections was different from what all the data show.
BTW, I believe your statement is incorrect, since new information (what "enough people" say about rubber fins) does not update the knowledge base. Rather, it appears that pointing out errors to ChatGPT causes it to examine a problem in more depth than in the initial response, looking again at the (frozen) knowledge base. I'll post an example separately.
Here's an interesting little ChatGPT experiment: I asked ChatGPT a question that I had previously asked several dozen scientists interviewing for a position in my lab. All of them had at least a B.S., some had masters or doctorates, and all had at least some basic knowledge of chemistry and statistics. The question:
"I have a small molecule with 2 functional groups at the end that can be modified, for example by attaching biotin. The modification reaction (for one group) is 60% efficient, so that 60% of the molecules will have a biotin at this group. What is the expected distribution of products on the molecule with two such groups, with respect to being labeled with biotin?"
Of the dozens of people I interviewed, not a single person came up with the correct answer (can you?). Only a few came reasonably close. A few did not understand the question at all, not even after trying to explain it in much more detail.
Here is ChatGPT's response:
(screenshot of ChatGPT's response: 36% of molecules labeled at both groups, 48% labeled at exactly one group - 24% at each end - and 16% unlabeled)
This is the correct response. ChatGPT not only understood the question, it also answered it better than the dozens of college-educated job seekers I posed this question to (and those were usually the top 5 or 10% of applicants, since we typically got >50 applications per job opening).
Even more interesting is that this was the second try at this response. The first response was only partially correct: the bot did not realize that group 2 consisted of 2 distinct populations, labeled at one end or the other, so it gave 24% instead of 48% there, and the numbers at the bottom were also mixed up. I did point out the inconsistencies, but got the "Something went wrong" error instead of an answer. I got the response shown above after re-loading the page and re-generating the response.
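For anyone who wants to check the numbers: treating the two groups as independent 60% events, a few lines of Python reproduce the distribution (this is just my own sanity check, not part of the exchange with ChatGPT):

```python
from itertools import product

# Each of the two functional groups is biotinylated independently
# with probability 0.6 (the 60% reaction efficiency from the question).
p = 0.6
dist = {0: 0.0, 1: 0.0, 2: 0.0}
for g1, g2 in product([True, False], repeat=2):
    prob = (p if g1 else 1 - p) * (p if g2 else 1 - p)
    dist[g1 + g2] += prob

for n in sorted(dist):
    print(f"{n} biotin label(s): {dist[n]:.0%}")
# -> 0 labels: 16%, 1 label: 48% (24% at each end), 2 labels: 36%
```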
Most of the people I ended up hiring were quite intelligent and well educated, and some were outstanding. But not a single one of them got as close as ChatGPT on the first answer, and none of them managed to get the right answer after I pointed out what was wrong with their initial answer. To me, that's more impressive than ChatGPT passing medical exams, or scoring in the 90th percentile on the (law) bar exam. Actually, I find it somewhat scary.
Data crawlers work better with this kind of thing (statistics/math), but as you may note, with my initial question CGPT4 never understood what I was saying when I wrote "Do I have to use soft or hard fins in Formula windsurfing?"... Obviously, I was asking about "relatively" soft or hard CARBON FINS... he missed the shot poorly.
Data crawlers work better with this kind of thing (statistics/math)
The math in my example is quite trivial, and a lot of the job candidates I interviewed would have had no problem answering the question if I had formulated it in purely statistical terms ("if I try something where my chance of success is 60% in each try, and try twice, what are the chances that I succeeded twice, once, or not at all?"). The amazing thing here was not that the bot came up with the correct answer - but rather, that it immediately understood what the question was aiming at, that it arrived at a good answer right away, and at the correct answer with minimal additional input.
A lot (but not all) of the things in the answers from ChatGPT and Bard actually were correct. It's your choice to focus on the parts of the answer that were incorrect (but which the bots can correct if you drill in), or to wonder what it means that current AI programs can give reasonable answers to rather esoteric questions, and perform better than 90% of highly trained humans in difficult exams like legal bar exams. BTW, they are also very useful for producing "cut and paste" example code for programming questions, which can cut down the time to solve specific problems by a factor of 5 or more, compared to "old fashioned" web searches.
Here's an interesting podcast about the question "Is artificial intelligence getting out of control?": podcasts.apple.com/us/podcast/today-in-focus/id1440133626?i=1000608535963
That has been the problem, and my experience, with all the bots I used or worked on (I worked for a couple of years with a team developing an AI for music composition). That is: assuming you can formulate a precise question, the only way to know if they give the correct answer is to know the correct answer already.
Sure, it is impressive that you can have a linguistic back and forth with a program, but if you did not know the correct ratios of molecules you would just have to go home with the incorrect ones.
The reality is that unsupervised machine learning on anything that is not just sheer computational math is decades away. And even with computational math of any sort it is questionable, unless the objective of the computation is VERY well defined (e.g. chess, or Go). Right now this wave of chat bots is fun, and can maybe be used to help in a number of applications, but fundamentally there is just a lot of hype from the usual scum in Silicon Valley trying to make a buck.
PS Your example is too trivial when the question is posed unambiguously. It is equivalent to asking: "I have a coin that has a 60% chance of showing heads when tossed. What is the probability of seeing 0, 1 or 2 heads in two tosses?" That is a question that many people without a bit of background in elementary probability theory might have difficulty answering correctly. (Compound probability of independent events is not a simple concept.) But it is an absolute triviality for anybody who does have that background, or for any computational engine of sorts like ChatGPT... unless of course it does not understand the question, and gives the wrong answer, like it did at first.
Having said all that: www.nytimes.com/interactive/2023/04/14/upshot/up-ai-uses.html
but rather, that it immediately understood what the question was aiming at
You can argue about its efficacy, accuracy and efficiency. But one thing it does not do is understand. It is not sentient.
There are plenty of AI experts that would disagree with you about whether or not ChatGPT is sentient. But I'll leave that to the philosophers.
As for "does it understand", however, I disagree with you. There are many different levels of understanding, which is quite obvious is you look at learning and teaching in a difficult or complex sport. It is not an "all or nothing" thing. A few examples come to my mind. One is when my karate teacher, who was a 6 dan, switched to a new teacher for his lessons, and had to do the most basic beginner form for his entire first private lesson. That's something most beginners can learn in a few lessons in their first couple of months, but there was still room for him to understand it at a deeper level after teaching it for more than a decade. Another one was my high school biology teacher, who outlined his grading requirements in his first lesson. In his classes, just memorizing facts would only get you a D. Anything else required demonstrating a deeper level of understanding, and applying that understanding to new problems. Based on his criteria, ChapGPT would be a contender for an A (which he have to less than 10% of students).
That has been the problem, and my experience, with all the bots I used or worked on (I worked for a couple of years with a team developing an AI for music composition). That is: assuming you can formulate a precise question, the only way to know if they give the correct answer is to know the correct answer already.
I happen to work in a field where it is quite easy to check whether the answer a chatbot gives is "correct" or not, even if you do not know the answer: software development. If you describe the issue and ask the bot for example code, you can just copy and paste it to figure out within a minute whether it works. So far, the results I have gotten were usually correct, and refining results is so easy and quick that it's quite easy to end up with better code than when using other methods, in less time.
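To illustrate that workflow with a made-up example (the helper function, its name, and the spec format are all hypothetical, not something a bot actually produced in this thread): suppose the bot suggests a small parsing helper, and a couple of throwaway checks tell you within a minute whether it works:

```python
# Hypothetical bot-suggested helper: split a fin spec like "FR-70-soft"
# into its parts. Everything here is invented for illustration.

def parse_fin_spec(spec: str) -> dict:
    family, length, flex = spec.split("-")
    return {"family": family, "length_cm": int(length), "flex": flex}

# A minute of throwaway checks tells you whether the suggestion works:
assert parse_fin_spec("FR-70-soft") == {"family": "FR", "length_cm": 70, "flex": "soft"}
assert parse_fin_spec("SL-44-hard")["length_cm"] == 44
print("bot-suggested helper passes the quick checks")
```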