This book provides a systematic and in-depth introduction to machine unlearning (MU) for foundation models, framed through an optimization-model-data tri-design perspective and complemented by assessments and applications. As foundation models are continuously adapted and reused, the ability to selectively remove unwanted data, knowledge, or model behavior without full retraining poses new theoretical and practical challenges. MU has therefore become a critical capability for trustworthy, deployable, and regulation-ready artificial intelligence.

From the optimization viewpoint, this book treats unlearning as a multi-objective and often adversarial problem that must simultaneously enforce targeted forgetting, preserve model utility, resist recovery attacks, and remain computationally efficient. From the model perspective, the book examines how knowledge is distributed across layers and latent subspaces, motivating modular and localized unlearning. From the data perspective, the book explores forget-set construction, data attribution, corruption, and coresets as key drivers of reliable forgetting.

Bridging theory and practice, the book also provides a comprehensive review of benchmark datasets and evaluation metrics for machine unlearning, critically examining their strengths and limitations. The authors further survey a wide range of applications in computer vision and large language models, including AI safety, privacy, fairness, and industrial deployment, highlighting why post-training model modification is often preferred over repeated retraining in real-world systems. By unifying optimization, model, data, evaluation, and application perspectives, this book offers both a foundational framework and a practical toolkit for designing machine unlearning methods that are effective, robust, and ready for large-scale, regulated deployment.
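The multi-objective optimization view described above can be sketched in miniature. The toy below is not taken from the book: it uses a 1-D linear model and a hypothetical trade-off weight `lam` to show the core idea of descending on the retain-set loss while ascending on the forget-set loss, so that targeted forgetting is traded off against preserved utility.

```python
# Illustrative sketch only: unlearning as a multi-objective problem on a
# 1-D linear model y = w * x with squared loss. Function names and the
# trade-off weight `lam` are hypothetical, not the book's API.

def grad_sq_loss(w, data):
    """Gradient of mean squared error of y = w * x over (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def unlearn(w, retain, forget, lam=0.3, lr=0.05, steps=200):
    """Gradient steps on the combined objective:
    minimize retain loss while maximizing forget loss."""
    for _ in range(steps):
        g = grad_sq_loss(w, retain) - lam * grad_sq_loss(w, forget)
        w -= lr * g
    return w

retain = [(1.0, 2.0), (2.0, 4.0)]   # consistent with w = 2
forget = [(1.0, 5.0)]               # the point whose influence we remove
w = unlearn(2.5, retain, forget)    # settles below 2: pushed away from
                                    # the forget point, at some utility cost
```

The fixed point balances the two gradients, which is exactly the tension the book formalizes: a larger `lam` forgets more aggressively but degrades utility on the retain set.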