Flashield's Blog

Just For My Daily Diary

Year: 2024

01.course-hello-seaborn【Hello seaborn】

Welcome to Data Visualization! In this hands-on course, you’ll learn how to take your data visualizations to the next level with seaborn, a powerful but easy-to-use data visualization tool. To use seaborn, you’ll also learn a bit about how to write code in Python, a popular programming language. That said, […]

07.exercise-data-leakage【Exercise: Data Leakage】

This notebook is an exercise in the Intermediate Machine Learning course. You can reference the tutorial at this link. Most people find target leakage very tricky until they’ve thought about it for a long time. So, before trying to think about leakage in the housing price example, we’ll go through a few examples in […]

07.course-data-leakage【Data Leakage】

In this tutorial, you will learn what data leakage is and how to prevent it. If you don’t know how to prevent it, leakage will come up frequently, and it will ruin your models in subtle and dangerous ways. So, this is one of the most important concepts for practicing data scientists. […]
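The "subtle and dangerous" failure mode can be shown on synthetic data: a feature derived from the target makes validation scores look spectacular, even though that feature would not exist at prediction time. A minimal sketch (all data invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)

# Leaky feature: a thinly disguised copy of the target itself.
X_leaky = np.column_stack([X, y + rng.normal(scale=0.01, size=500)])

model = RandomForestClassifier(n_estimators=50, random_state=0)
clean_acc = cross_val_score(model, X, y, cv=5).mean()
leaky_acc = cross_val_score(model, X_leaky, y, cv=5).mean()
# leaky_acc looks nearly perfect in validation, yet that model is useless in
# production, where the leaked column is unavailable.
```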

06.exercise-xgboost【Exercise: XGBoost】

This notebook is an exercise in the Intermediate Machine Learning course. You can reference the tutorial at this link. In this exercise, you will use your new knowledge to train a model with gradient boosting. Setup The questions below will give you feedback on your work. Run the following cell to set up […]

06.course-xgboost【XGBoost】

In this tutorial, you will learn how to build and optimize models with gradient boosting. This method dominates many Kaggle competitions and achieves state-of-the-art results on a variety of datasets. Introduction For much of this course, you have made predictions with the random forest method, which achieves better performance than […]
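The gradient-boosting workflow the tutorial builds toward follows the standard fit/predict pattern. Since the excerpt shows no data, here is a dependency-light sketch using scikit-learn's `GradientBoostingRegressor` as a stand-in for XGBoost's `XGBRegressor` (same API shape), on synthetic data, with the two most commonly tuned knobs:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=400)

X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

# n_estimators and learning_rate are the knobs most often tuned in boosting.
model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1,
                                  random_state=0)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_valid, model.predict(X_valid))
```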

05.exercise-cross-validation【Exercise: Cross-Validation】

This notebook is an exercise in the Intermediate Machine Learning course. You can reference the tutorial at this link. In this exercise, you will leverage what you’ve learned to tune a machine learning model with cross-validation. Setup The questions below will give you feedback on your work. Run the following cell to set […]

05.course-cross-validation【Cross-Validation】

In this tutorial, you will learn how to use cross-validation for better measures of model performance. Introduction Machine learning is an iterative process. You will face choices about what predictive variables to use, what types of models to use, what arguments to supply to those models, etc. So far, you have made […]
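In scikit-learn, that better measure of model performance is a one-liner: `cross_val_score` runs the modeling process on several folds and returns one score per held-out fold. A sketch on synthetic data (dataset details are invented):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)

# Five folds; each score is the negative MAE on one held-out fold.
scores = cross_val_score(RandomForestRegressor(n_estimators=50, random_state=0),
                         X, y, cv=5, scoring="neg_mean_absolute_error")
mean_mae = -scores.mean()
```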

04.exercise-pipelines【Exercise: Pipelines】

This notebook is an exercise in the Intermediate Machine Learning course. You can reference the tutorial at this link. In this exercise, you will use pipelines to improve the efficiency of your machine learning code. Setup The questions below will give you feedback on your work. Run the following cell to set up […]

04.course-pipelines【Pipelines】

In this tutorial, you will learn how to use pipelines to clean up your modeling code. Introduction Pipelines are a simple way to keep your data preprocessing and modeling code organized. Specifically, a pipeline bundles preprocessing and modeling steps so you can use the whole bundle as if it were a single step. […]
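The "single step" idea reads directly in scikit-learn code: imputation and the model are bundled into one `Pipeline`, so `fit` and `predict` each run the whole chain. A minimal sketch with invented data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

# Invented data with missing values that would crash a bare model.fit().
X = np.array([[1.0, 2.0], [np.nan, 3.0], [2.0, np.nan], [4.0, 5.0]] * 10)
y = np.arange(40, dtype=float)

pipe = Pipeline(steps=[
    ("impute", SimpleImputer()),   # fill NaNs with column means
    ("model", RandomForestRegressor(n_estimators=20, random_state=0)),
])
pipe.fit(X, y)           # imputes, then trains, in one call
preds = pipe.predict(X)  # same preprocessing is applied automatically
```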
