• 1 Post
  • 48 Comments
Joined 10 months ago
Cake day: December 11th, 2023





  • zerakith@lemmy.ml to Science Memes@mander.xyz · pringles · 2 months ago

    I'm pretty sure it's real. I once met someone who worked in materials research for food, and they said modelling was big there because the scope for experimentation is more limited. In construction materials, if they wanted to change a property they could play around with adding new additives and see what happens. For food, though, you can't add anything beyond a limited set of chemicals that already have approval from the various agencies*, so they try to fine-tune the product in other ways.

    Chocolate, for example: they control many material properties through very careful control of temperature and pressure as it solidifies. That's why, if chocolate melts and resolidifies, you see the white bits of milk that no longer remain within the material.

    *Okay, you can add a new chemical, but that means a time frame of over a decade to get approval. I think the number of chemicals that has happened to is very, very small, partly because the innovation framework of capitalism is very short-term.
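    A minimal sketch of what "tuning the process instead of the recipe" can look like: sweep a process parameter (here, the cooling-rate constant in a simple Newton-cooling model) and score how long the material spends inside a target temperature window as it solidifies. The model, the window and all the numbers are purely illustrative assumptions, not anything from actual food-materials research.

    ```python
    # Toy illustration: tune a process parameter (cooling rate) instead of the recipe.
    # Newton's law of cooling: T(t) = T_ambient + (T0 - T_ambient) * exp(-k * t)
    # Each cooling-rate constant k is scored by how long the melt sits inside a
    # target temperature window while solidifying. All numbers are illustrative.
    import math

    T0, T_AMBIENT = 45.0, 20.0     # start and ambient temperature (degC), illustrative
    WINDOW = (27.0, 31.0)          # hypothetical "good crystallisation" window (degC)
    DT, T_END = 0.1, 120.0         # time step and total time (minutes)

    def time_in_window(k: float) -> float:
        """Minutes spent inside WINDOW for cooling-rate constant k (1/min)."""
        minutes, t = 0.0, 0.0
        while t < T_END:
            temp = T_AMBIENT + (T0 - T_AMBIENT) * math.exp(-k * t)
            if WINDOW[0] <= temp <= WINDOW[1]:
                minutes += DT
            t += DT
        return minutes

    for k in (0.02, 0.05, 0.1, 0.2):
        print(f"cooling rate k={k:>4}: {time_in_window(k):5.1f} min in window")
    ```

    Slower cooling keeps the (hypothetical) material in the window longer; that's the kind of knob you turn when you can't change the ingredients.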







  • This is a persistent misunderstanding that I wish people would get past.

    Manufacturing things creates emissions. It costs energy and materials. Something could have absolutely no emissions in use and still be problematic at growing scales, because manufacturing it costs energy, emissions and resources. Hard drives wear out, die and need replacing. Researchers know how to account for this: it's a life cycle assessment (LCA) calculation. LCAs aren't perfect, but this is robust work (see the sketch below for the basic idea).
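    A very rough sketch of the kind of accounting an LCA does: amortise the embodied (manufacturing) emissions of a device over its service life and add the use-phase emissions from its electricity consumption. Every figure below is a placeholder to show the structure of the calculation, not real LCA data.

    ```python
    # Toy life-cycle-style accounting: embodied emissions amortised over the
    # device's lifetime, plus use-phase emissions from electricity consumption.
    # Every number here is a placeholder, not real LCA data.

    def annual_footprint_kgco2e(embodied_kgco2e: float,
                                lifetime_years: float,
                                power_draw_w: float,
                                hours_per_year: float,
                                grid_intensity_kgco2e_per_kwh: float) -> float:
        """Annual footprint = amortised manufacturing + use-phase electricity."""
        amortised_manufacturing = embodied_kgco2e / lifetime_years
        use_phase_kwh = power_draw_w * hours_per_year / 1000.0
        use_phase = use_phase_kwh * grid_intensity_kgco2e_per_kwh
        return amortised_manufacturing + use_phase

    # Hypothetical hard drive: embodied emissions spread over 5 years of 24/7 use.
    footprint = annual_footprint_kgco2e(embodied_kgco2e=60.0,   # placeholder
                                        lifetime_years=5.0,
                                        power_draw_w=7.0,       # placeholder
                                        hours_per_year=24 * 365,
                                        grid_intensity_kgco2e_per_kwh=0.4)  # placeholder
    print(f"~{footprint:.1f} kgCO2e per year (illustrative numbers)")
    ```

    The point isn't the exact number; it's that the manufacturing term never goes away, however clean the use phase looks.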

    IT accounts for up to 4% of global emissions and the sector is growing. People consistently act as if digital media has no footprint, but it does. https://www.sciencedirect.com/science/article/pii/S2666389921001884

    Yes, the headline is a little silly, but we do need to think strategically about the sector, and that starts by recognising it has an impact and asking ourselves what we actually want to preserve whilst we decarbonise the industry that supports it.

    There's no wiggle room left: no sector or set of behaviours can afford to be given slack. We are in the biggest race of our lives and the stakes are incomprehensibly huge.


  • The answers to your questions:

    (1) Yes, it's a different baseline from the one chosen for the Paris Agreement; different baselines are relevant to different parts of the issue. The baseline in your link likely comes down to what reliable data they have, so they choose a baseline from a period with good coverage rather than stitching in other sources. That site gives the latest year's official record in Paris terms, and I'd expect the next one (2024) to be much closer to 1.5°C.

    (2) I agree that current measurements suggest an instantaneous/yearly temperature of around 1.5°C above the relevant baseline.

    (3) You're right that the trend is unlikely to change, because it comes from radiative forcing (emissions) that has already occurred; even with suddenly zero human emissions we would see an increase, or at best a levelling off (before, perhaps, a long-term decline as CO2 is naturally removed from the atmosphere, or faster if humans find a way to do that at scale). A trend, though, is already an average over several time points, and you can see in your own link that year-on-year variation can be as high as ~0.3°C. That variation comes from non-GHG parts of the system (such as El Niño); you can already see a ~0.2°C drop after 2019 even though the trend is up. So we could well drop back to, say, 1.2°C for a few years before it climbs again. The link above suggests that, on the best data we have, we would likely breach 1.5°C by 2031, so not long at all. (The sketch below shows why a single hot year and a decadal average are different things.)
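    A minimal sketch of the difference between a single year's anomaly and the decadal average that targets like 1.5°C refer to. The annual anomalies here are synthetic (a linear trend plus a sine as a stand-in for El Niño-style variability), not real observations.

    ```python
    # Toy illustration: a single hot year is not the same thing as the decadal
    # average that the 1.5 degC target refers to. The anomalies are synthetic:
    # a linear trend plus a sine standing in for El Nino-style variability.
    import math

    years = range(2000, 2031)
    annual = {y: 0.8 + 0.025 * (y - 2000) + 0.2 * math.sin(2.1 * (y - 2000))
              for y in years}

    def decadal_mean(year: int) -> float:
        """Average anomaly over the 10 years ending in `year`."""
        return sum(annual[y] for y in range(year - 9, year + 1)) / 10

    for y in range(2009, 2031):
        single, decade = annual[y], decadal_mean(y)
        note = "  <- single year at/above 1.5" if single >= 1.5 else ""
        print(f"{y}: annual {single:+.2f} degC, decadal {decade:+.2f} degC{note}")
    ```

    Individual years can poke above a threshold while the decadal average, which is what the target is defined on, still sits below it.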

    This sounds like a pedantic point, but it's actually quite important for the climate, and the confusion stems from how the problem and the science were chosen to be communicated. Temperature was picked partly because it's a proxy for the other parts of the system that actually control the impacts, and it was felt that temperature would be "naturally understandable" to the general population (and politicians…). That backfired a bit, because 1.5°C doesn't sound like much when you think of it as the temperature of a room, which is exactly why this variable is different and why it matters that it's a decadal average rather than a yearly one.

    So if temperature is only a proxy, what are the variables that control the outcomes? One key one is the total heat energy stored in the different Earth systems, and there the size of the storage medium matters: 1.5°C across the whole planet represents vastly more energy than 1.5°C in a room, simply because of the sheer volume involved. The other place surface temperature adds confusion is the oceans. They have been absorbing some of the heat, and that hasn't always been visible to us (we don't live in the ocean), so even if we stopped emitting today the oceans may deposit some of that heat back into the atmosphere; it's a complex interaction.

    What we really need to know is the additional radiative forcing and how much extra heat energy is swimming about in the Earth's systems, because that is what will control the climate we experience. Greenhouse gases stop the Earth cooling back down by radiating heat out to space, which is why the effect is cumulative: a sustained, year-on-year 1.5°C and a record that averages less but has a few years at 1.5°C imply quite different amounts of total energy in the system. (The back-of-the-envelope below gives a feel for the energies involved.)
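    As a back-of-the-envelope illustration of why "extra heat energy" is the quantity to watch: a sustained planetary energy imbalance of 1 W for every square metre of the Earth's surface adds up to an enormous number of joules per year. The 1 W/m² figure is just a round number for illustration, not a measured value.

    ```python
    # Back-of-the-envelope: energy accumulated per year by a sustained planetary
    # energy imbalance. The 1 W/m^2 imbalance is a round illustrative number.
    import math

    EARTH_RADIUS_M = 6.371e6
    SECONDS_PER_YEAR = 3.156e7

    surface_area_m2 = 4 * math.pi * EARTH_RADIUS_M ** 2   # ~5.1e14 m^2
    imbalance_w_per_m2 = 1.0                               # illustrative

    joules_per_year = imbalance_w_per_m2 * surface_area_m2 * SECONDS_PER_YEAR
    print(f"Surface area: {surface_area_m2:.2e} m^2")
    print(f"Energy accumulated per year at 1 W/m^2: {joules_per_year:.2e} J")
    # For scale: one tonne of TNT releases about 4.184e9 J.
    print(f"...roughly {joules_per_year / 4.184e9:.2e} tonnes of TNT equivalent")
    ```

    That's on the order of 10^22 joules per year for a single watt per square metre, which is why small-sounding forcings matter so much when sustained.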

    So, the short answer is that the Paris Agreement targets are set on the basis of what a decadal rise of 1.5°C by 2100 (i.e. the average over 2090-2100) means in terms of excess heat energy and radiative forcing in the system. The limit itself is somewhat arbitrary, driven in part by the fact that we were at ~1°C when it was agreed and 2°C seemed like a reasonable estimate of something we might be able to limit ourselves to. The origin of 1.5°C rather than 2°C is actually quite interesting and says a lot about how climate policy gets decided, but this post is long enough.

    > This is a good point. The sheer apocalyptic magnitude of the problem means that every tiny amount of change matters. Billions will die. There probably isn't a way to prevent that completely anymore. But if we can tick things down by a fraction and save a few hundred thousand people, or preserve a species of food crop that would otherwise have gone extinct; IDK what the exact outcomes are, but the point is that tiny changes will have a massive impact, and they're important even if the situation is dire.

    Agreed, I think this is the right way of thinking about it. The risk of having communicated it to the world as a binary 1.5°C/2°C target is that people completely switch off if/when we finally confirm we've breached it, when the reality is that it should embolden us further, not demoralise us. This is my number one concern at the moment. I would also add that what we are doing is "pushing" a system away from its natural equilibrium, and if we push hard enough we may trigger changes in the system itself that are very hard or impossible to undo. So it's not just "more increase, more damage"; it's also about the risk of fundamentally and permanently changing the system.

    [Image: A potential energy surface with local and global minima, showing how forcing can shift the fundamental equilibrium the system operates in.]

    As an analogy, think of a ball sitting in the well of the local minimum while we push it back and forth. If we push hard enough, rather than settling back it escapes and finds another minimum, which is a whole different system to the one we are used to. These are sometimes called tipping points, and the frustrating thing about the complexity of these systems is that we don't and can't know for sure where those points are (although we do know the risks increase heavily as you move above 1.5°C). They are, by definition, hard to model, because models are built up from prior experience (data), and these are in part unprecedented changes in the atmospheric record. The toy simulation below shows the ball-in-a-well picture in miniature.
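    A toy version of that picture: a damped ball in a double-well potential V(x) = (x² - 1)², nudged by a constant "forcing". Below a critical push it settles back near the old minimum; past it, the old minimum ceases to exist and the ball ends up in the other well. This is only a cartoon of the tipping-point idea, not a climate model, and all the numbers are illustrative.

    ```python
    # Toy tipping-point cartoon: a damped ball in a double-well potential
    # V(x) = (x^2 - 1)^2, pushed by a constant forcing F. For small F the ball
    # settles near its old minimum; for large F that minimum disappears and the
    # ball ends up in the other well. Purely illustrative, not a climate model.

    def dV_dx(x: float) -> float:
        return 4.0 * x * (x * x - 1.0)

    def settle(forcing: float, x0: float = -1.0, damping: float = 1.0,
               dt: float = 0.01, t_end: float = 80.0) -> float:
        """Integrate the damped, forced ball and return where it ends up."""
        x, v = x0, 0.0
        for _ in range(int(t_end / dt)):
            v += dt * (-dV_dx(x) + forcing - damping * v)   # semi-implicit Euler
            x += dt * v
        return x

    for F in (0.0, 0.5, 2.0):
        x_final = settle(F)
        well = "left (old) well" if x_final < 0 else "right (new) well"
        print(f"forcing F={F:3.1f}: settles at x = {x_final:+.2f}  -> {well}")
    ```

    The qualitative point is the discontinuity: a modest push deforms the state a little, but past a threshold the old state simply isn't there to return to.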

    [Image: A slide about tipping points: it's like a game of Minesweeper, where the deeper we "dig" (more temperature increase), the more "mines" (tipping points) we risk hitting.]

    I haven't mentioned "negative emissions" technologies, but it's worth saying that in principle we could achieve significant negative emissions and end up at 1.5°C in 2100 whilst having spent a period of time above it; negative emissions technologies could be a whole other rant, though. Worth noting that many of the pathways showing we could just about keep to 1.5°C do rely on negative emissions to varying degrees (though the pathways are also limited by how far they think we can push our economic systems).


  • I see this misconception a lot and it's really unfortunate. We aren't at what climate scientists call 1.5°C. Being at 1.5°C in that sense means the average anomaly staying above 1.5°C for a period of decades. It isn't just a case of scientists being cautious; it's a completely different state of the climate, implying different impacts and a different amount of heat energy in the whole system.

    Yes, we have hit 1.5°C over the last 12 months, partly down to El Niño, which is expected to subside shortly. There is some discussion about whether this year was an expected random anomaly or whether it suggests some feedback loop has been underestimated, but we can't know until enough time has passed (maybe a year).

    All that means two things: the impacts we are already seeing are less severe than what you'd expect at a long-term 1.5°C, which is itself a reason to be extremely worried, and this has been factored into our estimates of what outcomes are possible (though the window for 1.5°C is increasingly narrow because, as you say, we still have our foot on the gas). So there is still time to make an impact, and every fraction of a degree and every kg of CO2 matters.



  • Just to be clear, I wasn't being facetious; I'm genuinely curious about the specifics, as I'm not as familiar with haulage.

    I suspect there is an argument that we've made cargo transport too cheap, and it's skewed the economics of local versus outsourced production.

    My preference would be pantograph systems on the motorways and main routes, which we could roll out quite quickly and which would remove the majority of emissions, coupled with a systemic look at our material needs and local production capacity with a view to lowering volumes.

    The Silvertown Tunnel (and Lower Thames Crossing) in London would be a good example: we are rebuilding our infrastructure around sustained and increased haulage along certain routes at great public expense, so I suppose this could be considered an indirect subsidy.



  • I won’t rehash the arguments around “AI” that others are best placed to make.

    My main issue is that "AI" as a term is basically a marketing one, meant to convince people these tools do something they don't, and it's causing real harm. It's redirecting resources and attention onto a very narrow subset of tools, displacing other, less intensive ones. These tools have significant impacts (in the middle of an existential crisis around our use and consumption of energy). There are some really good, targeted uses of machine learning techniques, but they are being drowned out by a hype train determined to make the general public think we have, or are near, Data from Star Trek.

    Additionally, as others have said, the current state of "AI" has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow and co-opt data that isn't theirs in order to build something that contains that data but is their copyright. Some of this data is intensely personal and sensitive, and it was never shared with the intent of training a model that may, in certain circumstances, spit it back out verbatim.

    Lastly, since you use the term Luddite: it's worth actually engaging with what that movement was about. Whilst it's pitched now as a generic anti-technology backlash, it was in fact a movement of people who saw what the priorities and choices embodied in the new technology meant for them: the people who didn't own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as an allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate goals, can we afford to use it now, and if so, for what purposes?

    Personally, I can’t wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.


  • It's not irrational to be concerned, for a number of reasons. Even if it's local and secure, AI image processing and LLMs add fairly significant processing costs to a simple task like this. That means higher requirements for the browser, higher energy use and therefore emissions (noting here that AI has blown Microsoft's climate mitigation plan out of the water, even with some accounting tricks).

    Additionally, you have to think about the long-term changes in behaviour this will generate. A handy tool for when people forget to produce properly accessible documents suddenly becomes the default way of making accessible documents. Consider two situations. In one, the culture promotes and requires content providers to consider different types of consumer and how they will experience the content; providers know that unless they spend the 1% extra time making it accessible for all, it will exclude certain people. Compare that to a situation where AI is pitched as an easy way not to think about people's experiences: the AI will sort it. Those two situations imply very different outcomes: in one there is care and thought about difference and diversity, in the other there isn't and disabled people are an afterthought. They also imply massively different energy and emissions requirements, because one makes every user run AI to get some alt text rather than generating it once at the source (a rough scaling sketch follows below).
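    A rough sketch of that scaling argument: generating a description once at the source is a fixed cost, while regenerating it on every reader's device scales with readers and views. The per-inference energy cost and audience sizes below are placeholders chosen purely to show how the two approaches scale, not measurements.

    ```python
    # Rough scaling sketch: alt text written once at the source vs. regenerated by
    # every reader's device. The per-inference energy and audience sizes are
    # placeholders chosen to show the scaling, not measured values.

    def inferences_at_source(num_images: int) -> int:
        """One generation per image, done once by the author/publisher."""
        return num_images

    def inferences_client_side(num_images: int, readers: int,
                               views_per_reader: int) -> int:
        """Every reader regenerates a description on every view."""
        return num_images * readers * views_per_reader

    IMAGES = 100                   # placeholder: images in a document/site
    READERS = 10_000               # placeholder audience
    VIEWS_PER_READER = 3           # placeholder repeat views
    ENERGY_PER_INFERENCE_J = 50.0  # placeholder energy cost of one caption

    results = [("at source", inferences_at_source(IMAGES)),
               ("client-side", inferences_client_side(IMAGES, READERS, VIEWS_PER_READER))]
    for name, count in results:
        kwh = count * ENERGY_PER_INFERENCE_J / 3.6e6
        print(f"{name:>12}: {count:>12,} inferences, ~{kwh:.3f} kWh")
    ```

    Whatever the real per-inference cost turns out to be, the client-side approach multiplies it by the whole audience, every time they view the content.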

    Finally, it's worth explaining a bit about alt text and how people use it, because it's not just a literal description of an image (which AI could indeed likely produce). Alt text should concisely summarise the salient aspects of the image that the author wants a reader to take away, and sometimes that message is slightly different for alt text users. AI can't do this, because it's about the message the content creator wants to send and ensuring it's accessible. As ever with these tech fixes for accessibility, the lived experience of people with those needs isn't actually present; it's an assumed need rather than what they are asking for.