Flashield's Blog

Just For My Daily Diary

Month: September 2024

05.course-model-cards [Model Cards]

Introduction A model card is a short document that provides key information about a machine learning model. Model cards increase transparency by communicating information about trained models to broad audiences. In this tutorial, you will learn about which audiences to write a model card for and which sections a model card should […]

04.exercise-ai-fairness [Exercise: AI Fairness]

This notebook is an exercise in the AI Ethics course. You can reference the tutorial at this link. In the tutorial, you learned about different ways of measuring the fairness of a machine learning model. In this exercise, you’ll train a few models to approve (or deny) credit card applications and analyze fairness. Don’t worry if […]

04.course-ai-fairness [AI Fairness]

Introduction There are many different ways of defining what we might look for in a fair machine learning (ML) model. For instance, say we’re working with a model that approves (or denies) credit card applications. Is it: fair if the approval rate is equal across genders, or is it better […]
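To make the first of those criteria concrete, here is a minimal sketch of comparing approval rates across groups with pandas. The DataFrame, the gender and approved column names, and the values are illustrative assumptions, not the course's actual data or code:

    import pandas as pd

    # Hypothetical model decisions; column names and values are
    # illustrative, not taken from the course notebook.
    preds = pd.DataFrame({
        "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
        "approved": [1, 0, 1, 1, 1, 0, 1, 0],
    })

    # One fairness criterion (demographic parity): the approval rate
    # should be roughly equal across groups.
    approval_rate = preds.groupby("gender")["approved"].mean()
    print(approval_rate)

    # A simple summary: the gap between the highest and lowest rate;
    # 0 means the approval rates are identical.
    print("approval-rate gap:", approval_rate.max() - approval_rate.min())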

03.exercise-identifying-bias-in-ai [Exercise: Identifying Bias in AI]

This notebook is an exercise in the AI Ethics course. You can reference the tutorial at this link. In the tutorial, you learned about six different types of bias. In this exercise, you’ll train a model with real data and get practice with identifying bias. Don’t worry if you’re new to coding: you’ll still be […]

03.course-identifying-bias-in-ai [Identifying Bias in AI]

Introduction Machine learning (ML) has the potential to improve lives, but it can also be a source of harm. ML applications have discriminated against individuals on the basis of race, sex, religion, socioeconomic status, and other categories. In this tutorial, you’ll learn about bias, which refers to negative, unwanted consequences […]

02.exercise-human-centered-design-for-ai [Exercise: Human-Centered Design for AI]

This notebook is an exercise in the AI Ethics course. You can reference the tutorial at this link. In the tutorial, you learned about human-centered design (HCD) and became familiar with six general steps to apply it to AI systems. In this exercise, you will identify and address design issues in six interesting AI use […]

02.course-human-centered-design-for-ai [Human-Centered Design for AI]

Introduction Before selecting data and training models, it is important to carefully consider the human needs an AI system should serve – and whether it should be built at all. Human-centered design (HCD) is an approach to designing systems that serve people’s needs. In this tutorial, […]
