• 1 Post
  • 310 Comments
Joined 2 years ago
Cake day: December 9th, 2023

  • I have not claimed that, I said that AI algorithms are likely to be part of our climate solutions and our ability to serve more people with less manual labour. They help to solve entirely new classes of problems and can do so far more efficiently than years of human labour.

    Hahaha, like AI will be a part of climate solutions? Are you serious right now?

    Y’all are incapable of understanding that expertise in your domain does not make you an expert in everything else. There is no way anyone in the industry you are speaking about will listen to climatologists and environmental scientists long enough to even begin to be helpful.

    You keep talking about technology, when this is really a discussion about the catastrophic myopia of the tech industry, of which you are making yourself a perfect example.

    https://www.goldmansachs.com/insights/articles/how-ai-is-transforming-data-centers-and-ramping-up-power-demand

    To some biologists, that approach leaves the protein folding problem incomplete. From the earliest days of structural biology, researchers hoped to learn the rules of how an amino acid string folds into a protein. With AlphaFold2, most biologists agree that the structure prediction problem is solved. However, the protein folding problem is not. “Right now, you just have this black box that can somehow tell you the folded states, but not actually how you get there,” Zhong said.

    “It’s not solved the way a scientist would solve it,” said Littman, the Brown University computer scientist.

    This might sound like “semantic quibbling,” said George Rose, the biophysics professor emeritus at Johns Hopkins. “But of course it isn’t.” AlphaFold2 can recognize patterns in how a given amino acid sequence might fold up based on its analysis of hundreds of thousands of protein structures. But it can’t tell scientists anything about the protein folding process.

    AlphaFold2’s success was founded on the availability of training data — hundreds of thousands of protein structures meticulously determined by the hands of patient experimentalists. While AlphaFold3 and related algorithms have shown some success in determining the structures of molecular compounds, their accuracy lags behind that of their single-protein predecessors. That’s in part because there is significantly less training data available.

    The protein folding problem was “almost a perfect example for an AI solution,” Thornton said, because the algorithm could train on hundreds of thousands of protein structures collected in a uniform way. However, the Protein Data Bank may be an unusual example of organized data sharing in biology. Without high-quality data to train algorithms, they won’t make accurate predictions.

    “We got lucky,” Jumper said. “We met the problem at the time it was ready to be solved.”

    https://www.quantamagazine.org/how-ai-revolutionized-protein-science-but-didnt-end-it-20240626/

    However, it should be noted that due to the intrinsic nature of AI, its success is not due to conceptual advancement and has not hitherto provided new intellectual interpretive models for the scientific community. If these considerations are placed in Kuhn’s framework of scientific revolution [68], AF release is a revolution without any paradigm change. Instead of “providing model problems and solutions for a community of practitioners” [68], it is a rather effective tool for solving a fundamental scientific problem.

    https://pmc.ncbi.nlm.nih.gov/articles/PMC12109453/

    This is because scientists working on AI (myself included) often work backwards. Instead of identifying a problem and then trying to find a solution, we start by assuming that AI will be the solution and then looking for problems to solve. But because it’s difficult to identify open scientific challenges that can be solved using AI, this “hammer in search of a nail” style of science means that researchers will often tackle problems which are suitable for using AI but which either have already been solved or don’t create new scientific knowledge.

    ^ this is NOT the scientific method, and it undermines the scientific integrity of the entire process.

    https://www.understandingai.org/p/i-got-fooled-by-ai-for-science-hypeheres

    https://www.scilifelab.se/news/alphafold3-early-pain-points-overshadow-potential-promise/

    https://www.reddit.com/r/biotech/comments/1d1096g/ai_for_drug_discovery/

    https://www.reddit.com/r/Biochemistry/comments/1gui8n8/what_can_alphafold_teach_us_about_the_impact_of/

    https://www.reddit.com/r/Biochemistry/comments/1j47wqy/thoughts_on_the_recent_veritasium_video_about/

    https://www.reddit.com/r/labrats/comments/1b1l68p/people_are_overestimating_alphafold_and_its_a/


  • I am treating you like a child because you refuse to use your brain.

    You gave me one obscure, very early-stage example that isn’t even connected to the overall rise in value of LLMs and other forms of AI that has created an economic bubble worse than the dotcom bubble. So you are claiming the next real AI revolution is justtttt around the corner with a totally new technology, you swear?

    Maybe?

    What I do know for sure is that you are far more interested in that maybe than in actually engaging with the existential, real-world problems we are facing right now…


  • You need to take a step back and realize how warped your perception of reality has gotten.

    Sure, LLMs and other forms of automation, artificial intelligence, and brute-forcing of scientific problems will continue to grow.

    What you are talking about, though, is extrapolating from that to a massive shift that just isn’t on the horizon. You are delusional; you have read too many sci-fi books about AI and can’t get your brain off the idea that this way of thinking is the future, no matter how dystopian it is.

    The value of AI simply isn’t there, and that is before you even include the context of the ecological holocaust it is causing and enabling by getting countries all over the world to abandon critical carbon-footprint reduction goals.

    Don’t come at me like you are being logical here; at least admit that this is the cool sci-fi tech dystopia you wanted and have been obsessed with. That is the only way you get to this point of delusion: the rest of us see these technologies and go “huh, that looks like it has some use,” whereas people like you hold what is essentially a religious view of AI, and from my perspective that is pathetic and offensive to religions that actually have substance to their philosophy and beliefs.

    The rich are using the gullibility of people like you to pump and dump entire economies, you fool.

    Edit: I am not sure why I wrote this as if you might actually take a step back; you won’t. This message is really for everyone else, to help emphasize how the interests of the entire earth are being derailed by the advent of a shitty religion and its mindless disciples. The sooner the rest of us get on the same page, the sooner we can resist people like you and keep your rigid, broken worldviews from destroying our futures.


  • This is one of the dumbest ideas I have ever heard in my life.

    In particular, when I am online and neurotypical toxic people are thrown off by my ADHD, the VERY FIRST THING they do in their intellectual laziness is assume I am a kid.

    The idea that this isn’t considered an explicit oppression of neurodivergent adults is laughable to me. These systems will police for neurodivergence, not age. The obvious response from the center will be “just don’t act like a kid then” and I would like to send a pre-emptive “go fuck yourself you are destroying everything” to those people.


  • If AI worked the way techbros think it does, it would be an affront to god that intelligence was so easy to create artificially. If you believe in god, you likely believe humans are special creations of god; but then why would god build our brains in such an inefficient, wildly overcomplicated manner if sentience and intelligence were so trivially easy that it would take a bunch of computer bros less than 100 years to build a far simpler machine that can achieve a similar and then surpassing intelligence?

    If you do not believe in god, you are an idiot if you think techbros can outsmart hundreds of millions of years of evolution with a couple of decades of ham-fistedly hacking away at concepts while ignoring the integrative knowledge from other fields, like the humanities, that is a prerequisite to even setting the proper goals for creating artificial intelligence in the first place.

    These are simply pattern-matching tools with a limited degree of context memory that you can interact with in plain English. Further, these machines are even worse logic machines than humans are, and as much as logic isn’t popular these days, it is a VERY necessary underpinning of functional intelligence in any real context.


  • I think this kind of critical analysis of the Fediverse could be completely right in every single one of its details and still miss the more important point: corporate social networks are being used in a directly hostile fashion towards vulnerable people RIGHT NOW, to a near-catastrophic degree of negligence, to put things in the most charitable terms possible. Further, the people who own those corporations publicly endorse narratives that invisibilize the violence happening to real human beings.

    Realize that by getting lost in a baseball-stats-esque evaluation of the Fediverse, we already cede ground to people who are disingenuous. We have to consider the context of the alternative reality of corporate social media to fairly evaluate the Fediverse.


  • What do you mean by cancel culture?

    I feel like you are mistaking all acts of boycotting or mass comment submission for “cancel culture”.

    I am not arguing for DDoSing Wikipedia, editing articles with hostile intent, or smearing Wikipedia people in public places…

    …I am arguing for organizing a campaign to submit feedback on the articles about the Fediverse FROM people on the Fediverse, who explain in their own words why they think the way Wikipedia describes the Fediverse is incomplete, problematic, and misleading.

    Those are two VERY different things, and I see no danger of slipping into “cancel culture”, because the basic objective isn’t to silence, hurt, or destroy something; it is to correct the narrative ABOUT US being pushed by a prominent source of information that should be beholden to people coming to it and saying “what you wrote about me isn’t right”. They can disagree, but the more of us who argue the point in a genuine and substantiated way, the harder it gets to ignore us and keep the distorted narrative intact.


  • 1.) This is part of the background narrative being pushed by the rich and powerful that we need AI and big tech to moderate us, when the opposite is true: we need more humans involved in moderation who have a stake in their community.

    2.) The prevailing winds in the tech journalism sphere have been strangely blowing against the Fediverse since the beginning. The simplest explanation to me is that there is a lot of money in writing off the Fediverse as a cool, nerdy space that is nonetheless an unrealistic solution for everybody else, and in pushing the axiom that a Harvard MBA is needed to translate the Fediverse into a product the public can actually use.

    You will NOT notice these same prevailing winds blowing against for-profit corporate social networks like Bluesky and Threads… and it is a curious thing, isn’t it…