Got lucky and took this OA for free: 70 min, 10 questions. Reporting back to the forum.
The first part is 6 multiple-choice questions, then one output question (basically fill in the blank), one LC problem, and two ML-related ones.
A few of the multiple-choice questions are the same as ones posted on the forum before, for example:
You have chosen to use bootstrap aggregation as part of the model development process for a classification task that has been assigned to you because you are concerned with overfitting. Please select the appropriate rationale(s) for why bootstrap aggregation may help prevent overfitting.
Select all correct options.
a. Every classifier trained has to go through a validation process
b. Every algorithm used during bootstrap aggregation is inherently prone to prevent overfitting
c. Every classifier trained during bootstrap aggregation is considered a weak classifier
d. Bootstrap aggregation uses a combination of weak and strong classifiers
e. Bootstrap aggregation uses sampling with replacement
f. None of the above
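For what it's worth, option e is at least a true statement: bagging draws each training set with replacement, and averaging the resulting classifiers cuts variance. A minimal sketch with scikit-learn (the data and parameters are my own toy setup, not from the OA):

```python
# Minimal bagging illustration (my own toy setup, not from the OA).
# Each base tree is fit on a bootstrap sample (drawn WITH replacement),
# and the ensemble votes, which reduces variance and thus overfitting.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
bag = BaggingClassifier(n_estimators=50,   # 50 trees, each on its own sample
                        bootstrap=True,    # sampling with replacement
                        random_state=0).fit(X_tr, y_tr)

print("single tree:", tree.score(X_te, y_te))   # typically lower
print("bagged trees:", bag.score(X_te, y_te))   # typically higher
```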
Another repeated one: Imagine that you are working for a financial services company, and you are tasked with creating a model which predicts the likelihood that an individual will default on a loan (i.e., stops making the required repayments). The initial model you created has a predictive accuracy that's only marginally better than chance, so you are considering an ensemble learning approach. Please select all appropriate options that should be considered for using ensemble learning. (I'm pasting the answers I dug up while preparing as a reference; they're not necessarily correct.)
1. If the dataset contains both linear and non-linear relationships, ensemble learning approaches are useful for combining them.
o True: Ensemble learning can effectively combine models that capture different types of relationships in the data, improving overall performance.
2. Ensemble learning techniques typically create overfitted models.
o False: Ensemble learning, especially techniques like bagging (e.g., Random Forest) and boosting (e.g., Gradient Boosting Machines), usually help in reducing overfitting by combining multiple models.
3. Ensemble learning techniques can be time-intensive to train.
o True: Training ensemble models, particularly those involving many base learners, can be computationally intensive.
[The rest of this question set is hidden behind the forum's points paywall; only these option fragments are visible:]
input
increase model complexity
none of the above
Some were new; here's one I remember:
Choose the regularization penalties that might have been used, based on the zeroed-out coefficients (a sketch follows the options):
L0 regularization
L1 regularization
L2 regularization
L3 regularization
L4 regularization
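The "zeroed out coefficients" hint points at sparsity-inducing penalties (L1, and L0 if it's offered; L2 only shrinks weights toward zero without zeroing them). You can see L1's effect on toy data (my own setup, hypothetical numbers):

```python
# L1 (lasso) zeroes out coefficients; L2 (ridge) only shrinks them.
# Toy data of my own, purely for illustration.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)  # only 2 useful features

print(Lasso(alpha=0.1).fit(X, y).coef_)  # most entries exactly 0.0
print(Ridge(alpha=0.1).fit(X, y).coef_)  # small but nonzero everywhere
```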
The output question:
You need to know how to compute the sigmoid activation function, σ(x) = 1/(1 + e^(-x)).
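Watch the minus sign in the exponent. A small, numerically stable version (my own sketch of what the fill-in probably wants):

```python
import math

def sigmoid(x: float) -> float:
    # sigma(x) = 1 / (1 + e^(-x)); branch on the sign of x
    # so math.exp never sees a large positive argument (no overflow).
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

print(sigmoid(0.0))  # 0.5
print(sigmoid(2.0))  # ~0.8808
```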
The LC problem:
It's a variant of LeetCode 443. String Compression, but you only need to return the character with the longest run, as its str plus its count.
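A sketch of how I'd solve the variant (the scan is the same as in LC 443; the exact return format and tie-breaking are my guesses):

```python
def longest_run(s: str):
    # LC 443 variant: instead of compressing in place, return only the
    # character with the longest consecutive run and that run's length.
    # (Tie-breaking and exact output format are my assumptions.)
    best_char, best_len = "", 0
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1                     # extend the current run
        if j - i > best_len:
            best_char, best_len = s[i], j - i
        i = j                          # jump to the next run
    return best_char, best_len

print(longest_run("aabcccccaaa"))  # ('c', 5)
```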
The two ML questions:
One is very simple gradient descent: you basically just type in the update formula (sketch below).
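If it's like other OAs, you literally implement θ ← θ − α·∇J(θ). A generic batch version for linear regression with MSE loss (the actual loss and learning rate on the OA may differ; this is just the pattern):

```python
import numpy as np

def gradient_descent(X, y, lr=0.05, steps=2000):
    # Batch gradient descent for linear regression with MSE loss:
    #   theta <- theta - lr * (2/n) * X^T (X @ theta - y)
    # (The OA's loss may differ; this is just the update pattern.)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        grad = (2.0 / n) * X.T @ (X @ theta - y)
        theta -= lr * grad
    return theta

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # bias column + feature
y = np.array([2.0, 3.0, 4.0])                       # y = 1 + x
print(gradient_descent(X, y))                       # ~[1.0, 1.0]
```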
The other is Naive Bayes; that one was too long and I can't remember it clearly.
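Since I can't reconstruct the actual question, here is just the generic Gaussian Naive Bayes pattern for prep (everything below is my own choice, not the OA's setup):

```python
# Generic Gaussian Naive Bayes, from scratch, for prep only.
import numpy as np

def fit_gnb(X, y):
    # Per class: prior, per-feature mean and variance.
    return {c: (np.mean(y == c),
                X[y == c].mean(axis=0),
                X[y == c].var(axis=0) + 1e-9)   # variance floor for stability
            for c in np.unique(y)}

def predict_gnb(params, x):
    # argmax over classes of log prior + sum of log Gaussian likelihoods
    def log_posterior(c):
        prior, mu, var = params[c]
        return np.log(prior) - 0.5 * np.sum(
            np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(params, key=log_posterior)

X = np.array([[1.0], [1.2], [3.0], [3.2]])
y = np.array([0, 0, 1, 1])
print(predict_gnb(fit_gnb(X, y), np.array([1.1])))  # 0
```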
Please give me some rice (forum points), everyone! Reading interview experience posts with no points is too painful.