These days, you can already ask a robot for a job.

Robots don't just take human jobs; they have also begun to hand them out. Attend any event in the recruitment industry and you will hear phrases like "machine learning", "big data" and "predictive analytics" in the air.

The appeal of these tools in recruitment is simple. Robot recruiters can screen thousands of candidates far faster than humans can. They may also be fairer: because they do not carry the intentional or unintentional prejudices that humans do, they could hire a more diverse and better-qualified workforce.

It is an attractive idea, but also a dangerous one. Algorithms are not inherently neutral just because the world they see consists only of "0" and "1".

First of all, a machine learning algorithm can be no better than the training data it learns from. Consider, for example, a doctoral dissertation that the academic researcher Colin Lee publicized this year. He analyzed 447,769 successful and unsuccessful job applications and built a model that predicted with 70% to 80% accuracy which candidates would be invited to interview. The press release said the algorithm could potentially be used as a tool to avoid "human error and unconscious bias" when screening large numbers of resumes.
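
The core problem can be shown in a few lines. The sketch below is a deliberately simplified, hypothetical illustration (the age bands, counts and threshold are all invented, not taken from the dissertation): a model "trained" on past screening decisions simply learns to reproduce them.

```python
# Hypothetical illustration: a model trained on historical screening
# decisions reproduces whatever pattern those decisions contained.
# All data below is invented for the example.
from collections import defaultdict

# Each record: (age_band, was_invited) from a fictional hiring history
# in which older and younger applicants were rarely invited.
history = (
    [("under_25", False)] * 70 + [("under_25", True)] * 30
    + [("25_to_45", False)] * 40 + [("25_to_45", True)] * 60
    + [("over_45", False)] * 80 + [("over_45", True)] * 20
)

# "Training": estimate P(invited | age_band) from the past decisions.
counts = defaultdict(lambda: [0, 0])  # age_band -> [invited, total]
for age_band, invited in history:
    counts[age_band][0] += int(invited)
    counts[age_band][1] += 1

def predict_invite(age_band, threshold=0.5):
    invited, total = counts[age_band]
    return invited / total >= threshold

# The model now screens out older applicants, exactly as the
# historical decisions did: 0.6 >= 0.5 but 0.2 < 0.5.
print(predict_invite("25_to_45"))  # True
print(predict_invite("over_45"))   # False
```

The model is "accurate" in the sense of matching past decisions, which is precisely why it inherits their biases.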

But such a model would absorb the human biases embedded in the original hiring decisions. For example, the study above found that age was the strongest predictor of whether a candidate would be invited to interview, with the youngest and oldest candidates the least likely to succeed. You may feel this is fair where inexperienced young people genuinely perform worse, but the common practice of rejecting older candidates is something that deserves scrutiny, not something that should be programmed in and perpetuated.

Lee acknowledges these problems and suggests that attributes such as gender, age and ethnicity be removed from resumes before the model uses them. Even then, the algorithm may still discriminate. In a paper published this year, the scholars Solon Barocas and Andrew Selbst describe the case of an employer who wants to pick the employees most likely to stay in the job for the long term. If the historical data show that female employees stay in their jobs for much less time than male employees (perhaps because they leave when they have children), the algorithm may seize on attributes that correlate with gender and produce results that disadvantage women.
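
This proxy effect can also be made concrete. In the hypothetical sketch below (all names, features and counts are invented), gender is never given to the screening rule, yet a correlated feature reproduces the gender gap anyway:

```python
# Hypothetical illustration of proxy discrimination: gender is removed
# from the data, but a correlated feature ("career_gap") lets a
# tenure-based rule disadvantage the same group. All numbers invented.

# (gender, career_gap, years_stayed) -- gender is NOT used by the rule;
# 100 fictional women and 100 fictional men.
records = (
    [("F", True, 2)] * 60 + [("F", False, 5)] * 40
    + [("M", True, 2)] * 10 + [("M", False, 5)] * 90
)

# A "gender-blind" rule: screen out anyone with a career gap, because
# career gaps correlate with short tenure in the historical data.
def screened_out(career_gap):
    return career_gap

rejected = [g for (g, gap, _) in records if screened_out(gap)]
rate_f = rejected.count("F") / 100  # per the 100 women above
rate_m = rejected.count("M") / 100  # per the 100 men above
print(rate_f, rate_m)  # 0.6 0.1 -- women rejected six times as often
```

The rule never sees gender, yet its impact is sharply skewed, which is exactly the scenario Barocas and Selbst warn about.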

What about the distance between an applicant's home and the office? It may be a good predictor of attendance and of how long the employee will stay with the company; yet it may also inadvertently discriminate against certain groups, because residential neighborhoods differ in their ethnic and age makeup.

These phenomena raise a thorny question: is discrimination wrong even when it is rational and unintentional? This is a murky area of law. In the United States, under the doctrine of "disparate impact", seemingly neutral employment practices that disproportionately harm a "protected class" are illegal even if the employer did not intend to discriminate. But an employer can mount a defense by showing that the practice has a strong business justification. If the intent behind the algorithm is simply to recruit the best candidate for the position, that may be a good enough defense.

Having said that, employers who want a more diverse workforce clearly cannot take it for granted that they can simply hand the job over to a computer. If diversity is what they want, they will have to use the data more imaginatively.

For example, instead of treating their existing corporate culture as a given and searching for the candidates statistically most likely to succeed within it, they could look for data on the conditions under which a more diverse workforce would succeed.

If the only thing machine learning can learn from is your past, it cannot carry your workforce into the future.
