
I find myself in a disposable hotel, a place that could not have existed 10-15 years ago, and most likely will not exist at some point in the near- to medium-term future, when we are forced to consider more carefully what we choose to build and consume.
I recently returned from a five-day trip to Shanghai, my first visit to the city and the country since late 2019. It was interesting and instructive to be back. While it was a short and incomplete visit, I thought it might be worthwhile to share some observations, especially given that the vast majority of content on China is produced at a distance and with little direct engagement.
Substack has been constantly reminding me of my failure to publish new notes. This is not exactly surprising: it is a model that encourages a ‘pump and dump’ approach to writing, and the content monster is always hungry.
In a prior note, I reflected on ‘time, for what purpose?’. One prompt for it was the Japanese term ‘time performance’, used to evaluate the supposed value of time spent on an activity. As with so much of the present moment, it is both logically consistent and absurd.
In Japan, the sakura are falling. Perhaps elsewhere, too.
My aim in this series is to think through the likely trajectory of current AI technologies, and what some of the potential ramifications might be (notes one and two). In doing so, I am not attempting to assess how powerful these current models are or could become, but am assuming that they have reached a stage of development consequential enough to be impactful.
In the first note, thinking through the arrival of ChatGPT and the incipient AI arms race, I suggested Shoshana Zuboff’s work on surveillance capitalism as an important reference point. The behaviour of big tech over the last 15 years offers a clear track record of what we might expect, and it does not look promising.
Prior to the pandemic, I had commenced work on a project that would consider Artificial Intelligence (AI) safety practices with reference to the development of nuclear power. At that time, I was struck by how comparatively little work had been done on AI safety, given the potential downside risks.
Below is a selection of news pieces, articles and podcasts that have caught my attention. I might try doing notes like these more regularly if there is interest. The full name is ‘Very Intense Tropical Cyclone Freddy’. From the FT: what is remarkable is that the cyclone circled around and hit Madagascar and Mozambique twice. It came back for a second shot. And add in that Malawi is experiencing its worst cholera outbreak on record.
I was fortunate to have the opportunity to chat with Andrew Keen as part of his ‘Keen On’ podcast series. The prompt for our discussion was my recent note: from that, we had a wide-ranging conversation about a growing array of examples of institutional failure and state fragility.
- Simone de Beauvoir, The Ethics of Ambiguity (1947)
- Robert Oppenheimer, Hearing Before Personnel Security Board (1954)
- Bill Joy, ‘Why the Future Doesn’t Need Us’ (2000)
- Victoria Krakovna, ‘Risks from general artificial intelligence without an intelligence explosion’ (2015)
- Ezra Klein, ‘This Changes Everything’ (2023)
- Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, ‘ChatGPT Heralds an Intellectual Revolution’ (2023)