LINKDING

Shared bookmarks
  • #AI #freesoftware #accountability #AIrisks #open | This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector. A review of the literature on openness in AI systems reveals that ‘open’ AI systems are in practice closed, as they remain highly dependent on the resources of a few large corporate actors.
    Yesterday | Shared by teclista
  • #AI #AIrisks #framing #messaging #narrative | ARTIFICIAL INTELLIGENCE: A Messaging Guide. What is this? This document outlines some initial principles for communicating about the subject of AI, in particular for broadcast media appearances. As AI is a fast-moving subject, this will be a living document, regularly updated and improved. It has b...
    4 months ago | Shared by teclista
  • #AI #AIrisks #generativeAI | This CETaS Briefing Paper examines how Generative AI (GenAI) systems may enhance malicious actors’ capabilities in generating malicious code, radicalisation, and weapon instruction and attack planning. It synthesises insights from government practitioners and literature to forecast inflection points where GenAI could significantly increase societal risks. Unlike previous studies focused on technological change, this paper also considers malicious actors’ readiness to adopt technology and the systemic factors shaping the broader context. It argues for a sociotechnical approach to AI system evaluation, urging national security and law enforcement to understand how malicious actors’ methods may evolve with GenAI.
    4 months ago | Shared by teclista