Category: Higher Ed

  • The Ups and Downs of AI

    Financial

Last month, OpenAI’s shift to a for-profit public-benefit corporation lifted Microsoft’s valuation past $4T. CEO Sam Altman continues to travel the world for funding deals, and October was a busy month for him. Technology contrarian Ed Zitron calculates OpenAI’s cash needs over the next 12 months to be $400B, but Fed Chairman Powell dispels the connection between the AI funding boom and the dotcom crash: “I won’t go into particular names, but they actually have earnings.” (Fortune).

Criticism abounded after OpenAI’s CFO opened a can of worms by suggesting government guarantees for data centers, only for Altman to walk those comments back. Critics of OpenAI continue to raise alarms.

    Why does this matter?

    If you have a retirement account, you’ll likely care about a potential stock market correction or crash. Aside from that, the financing of AI data centers has tentacles into other companies and industries (Oracle, Google, Nvidia, Meta, Microsoft, power companies, etc.), so downturns and bankruptcies would likely lead to market disruption. For higher education, there are considerations about AI model pricing, and stock market fluctuations can affect giving to non-profit organizations.

    AI & Productivity

    Amazon & UPS announced layoffs at the end of last month. From Amazon SVP, Beth Galetti: “This generation of AI is the most transformative technology we’ve seen since the Internet, and it’s enabling companies to innovate much faster than ever before (in existing market segments and altogether new ones).” Analyst Gil Luria suggests “companies appear to be making the cuts partly to hold their overall profit margins steady while they spend tens of billions of dollars on A.I. infrastructure like data centers. Cutting back on employees is a way to convince shareholders.”

But Luria also notes: “[w]e do think that at some point A.I. tools will allow us to enhance productivity to a point that we’re going to need less labor, but we’re not there yet, not in any significant way.” Another way of thinking about AI and productivity, though, is not merely as task augmentation but as something that enables creativity. From developer Aaron Boodman:

    “Claude doesn’t make me much faster on the work that I am an expert on. Maybe 15-20% depending on the day. It’s the work that I don’t know how to do and would have to research.

    Or the grunge work I don’t even want to do. On this it is hard to even put a number on.

    Many of the projects I do with Claude day to day I just wouldn’t have done at all pre-Claude. Infinity% improvement in productivity on those.

    (Emphasis mine)

    Why does this matter?

As I mentioned in an earlier post, some economists suggest that AI productivity gains may follow a J curve. Although those gains aren’t yet visible in the data, there is growing anecdotal evidence of structural changes in work, particularly in visual and technical fields.

    AI & Higher Education

Wharton Human-AI Research reported that many enterprises have incorporated AI tools into employees’ daily work, and that their use is no longer exploratory in nature.

Higher ed, meanwhile, is not using AI to the same degree. Only 2% of student success leaders say their institutions are very effective at using AI. Their measure is subjective, but the picture suggests that AI adoption in higher education is slower than in industry (for good or for ill). Higher ed leaders are exploring governance and policy, work that will likely struggle to keep pace with fast-moving AI advancements.

Why does this matter?

    Universities continue to explore using AI, but at a pace slower than industry. There are opportunities for universities to participate in both the conversations framing the use of AI and the practical use of the tools.

  • Tuesday AI Links (Nov. 4)

  • Resisting AI?

Dan McQuillan writes, “The role of the University is to resist AI,” following themes from Ivan Illich’s ‘Tools for Conviviality’.

    It’s a scathing overview with points that I think many others wonder about (although in less concrete ways than McQuillan).

    Contemporary AI is a specific mode of connectionist computation based on neural networks and transformer models. AI is also a tool in Illich’s sense; at the same time, an arrangement of institutions, investments and claims. One benefit of listening to industry podcasts, as I do, is the openness of the engineers when they admit that no-one really knows what’s going on inside these models.

    Let that sink in for a moment: we’re in the midst of a giant social experiment that pivots around a technology whose inner workings are unpredictable and opaque.

    The highlight is mine. I agree that there’s something disconcerting about using systems that we don’t understand fully.

    Generative AI’s main impact on higher education has been to cause panic about students cheating, a panic that diverts attention from the already immiserated experience of marketised studenthood. It’s also caused increasing alarm about staff cheating, via AI marking and feedback, which again diverts attention from their experience of relentless and ongoing precaritisation.

    The hegemonic narrative calls for universities to embrace these tools as a way to revitalise pedagogy, and because students will need AI skills in the world of work. A major flaw with this story is that the tools don’t actually work, or at least not as claimed.

    AI summarisation doesn’t summarise; it simulates a summary based on the learned parameters of its model. AI research tools don’t research; they shove a lot of searched-up docs into the chatbot context in the hope that will trigger relevancy. For their part, so-called reasoning models ramp up inference costs while confabulating a chain of thought to cover up their glaring limitations.

I think there are philosophical questions here worth considering. Specifically, the postulation that AI merely “simulates” is too simplistic to be helpful. What is a photograph? It’s a real thing, but not the real thing captured in the image. What is a video played on a computer screen? It’s a real thing, but it’s not the real thing. The photo and the screen simulate the real world, yet I’m not aware of modern philosophers critiquing these forms of media. (I’d suspect that earlier media theorists did just that until the media were accepted en masse by society.)

He goes on to cite environmental concerns (although, as I posted recently, the questions of water consumption are exaggerated), which are worth taking heed of. His language is perhaps a bit too revolutionary.

    As for people’s councils — I am less sanguine that these have much utility.

    Instead of waiting for a liberal rules-based order to magically appear, we need to find other ways to organise to put convivial constraints into practice. I suggest that a workers’ or people’s council on AI can be constituted in any context to carry out the kinds of technosocial inquiry advocated for by Illich, that the act of doing so prefigures the very forms of independent thought which are undermined by AI’s apparatus, and manifests the kind of careful, contextual and relational approach that is erased by AI’s normative scaling.

I suspect that people’s councils are glorified committees — structures that are more kabuki theater than anything else and will struggle to align with the speed at which AI tools are emerging.

    The role of the university isn’t to roll over in the face of tall tales about technological inevitability, but to model the forms of critical pedagogy that underpin the social defence against authoritarianism and which makes space to reimagine the other worlds that are still possible.

    I don’t share all of his fears, but it’s important to consider voices that may not align with a techno-optimistic future.

  • NMSU to offer Bachelor of Science in AI

New Mexico State University will offer an AI degree starting in 2026. I expect to see more degrees like this as universities begin to incorporate AI-specific learning.

    From their press release:

    AI jobs are those where a significant portion of the tasks can be performed or aided by artificial intelligence. This means that it is likely to impact the way these jobs are done, potentially leading to automation, new job roles or changes in required skills. Pontelli emphasized AI should be viewed as an opportunity.