kgaitanis, that is definitely a pattern, but I'd defend this particular problem as not at all fuzzy or debatable.
The problem with the argument is that it treats the percentage of hires who were experienced as evidence of selection for inexperience. But we don't know anything about the pool of applicants! It would be like accusing a company of discrimination based on its employee demographics without knowing the demographics of the surrounding community. What if the 30 experienced weavers who were hired were literally the only experienced people who applied, and all of the other 1,970 applicants were inexperienced? Then we'd have a 100% acceptance rate for experienced applicants vs. about 6% for inexperienced ones. In that case, it wouldn't look at all as though employers preferred inexperienced candidates. On the other hand, if most of the applicants were experienced but only 20% of those hired were experienced, then the acceptance rate for experienced applicants would indeed be lower than the rate for inexperienced applicants, and we'd be inclined to agree with the original conclusion.
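To put rough numbers on those two scenarios, here's a quick sanity-check sketch. The figures are my reading of the stimulus (2,000 applicants, 150 hires, 30 of the hires experienced), and the "most applicants experienced" count in scenario 2 is just an illustrative assumption:

```python
# Sanity check of the two scenarios. Assumed figures (my reading of the
# stimulus): 2,000 applicants, 150 hired, 30 of the hires experienced.

TOTAL_APPLICANTS = 2000
TOTAL_HIRED = 150
EXPERIENCED_HIRED = 30
INEXPERIENCED_HIRED = TOTAL_HIRED - EXPERIENCED_HIRED  # 120

def acceptance_rates(experienced_applicants):
    """Return (experienced, inexperienced) acceptance rates for a given applicant mix."""
    inexperienced_applicants = TOTAL_APPLICANTS - experienced_applicants
    return (EXPERIENCED_HIRED / experienced_applicants,
            INEXPERIENCED_HIRED / inexperienced_applicants)

# Scenario 1: the 30 experienced hires were the only experienced applicants.
print(acceptance_rates(30))    # (1.0, ~0.061): experienced applicants fare far better

# Scenario 2: most applicants (say 1,500 of the 2,000) were experienced.
print(acceptance_rates(1500))  # (0.02, 0.24): experienced applicants fare far worse
```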
Therefore, the basic assumption here is "More than 20% of the applicants were experienced." If that's true, the author has at least some point. If 20% or fewer of the applicants were experienced, then there's no sign of a preference for inexperienced candidates.
So why is A right? In order for more than 20% of the applicants, i.e. more than 400 of the 2,000-person pool, to be experienced, more than 370 of the rejected candidates would have to have been experienced. In order for THAT to be true, SOME (i.e. more than zero) of the rejected candidates would need to have been experienced. That's how necessary assumptions work: they describe something that is NEEDED (but not necessarily sufficient) for the argument. If you negate them, the argument fails. If we negate A, then NONE of the rejected candidates were experienced. That matches the first case I mentioned, in which ALL of the experienced applicants were hired. In that case, the argument would make no sense at all.
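And here's the negation test in numbers, using the same assumed figures as above (2,000 applicants, 30 experienced hires):

```python
# Negation test with the assumed figures: 2,000 applicants, 30 experienced hires.
TOTAL_APPLICANTS = 2000
EXPERIENCED_HIRED = 30

# For the argument to work, more than 20% of applicants must be experienced,
# and all but the 30 hired would have to come from the rejected pile.
threshold = 0.20 * TOTAL_APPLICANTS               # 400 experienced applicants needed
rejected_needed = threshold - EXPERIENCED_HIRED   # 370 of them must be among the rejects
print(threshold, rejected_needed)                 # 400.0 370.0

# Negate A: zero rejected applicants were experienced.
experienced_applicants = EXPERIENCED_HIRED + 0    # only the 30 who were hired
print(experienced_applicants / TOTAL_APPLICANTS)  # 0.015, i.e. 1.5% of the pool
# Every experienced applicant was hired, so there's no evidence of a preference
# for inexperienced weavers and the argument collapses.
```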