Utilitarians are often criticized as cold-hearted because they assign numbers to suffering and happiness (or to the number of lives saved) and make ethical decisions based on calculations. On this point, Russell and Norvig write in the third edition of their famous artificial intelligence textbook (section 16.3.1):
Although nobody feels comfortable with putting a value on a human life, it is a fact that tradeoffs are made all the time. Aircraft are given a complete overhaul at intervals determined by trips and miles flown, rather than after every trip. Cars are manufactured in a way that trades off costs against accident survival rates. Paradoxically, a refusal to put a monetary value on life means that life is often undervalued. Ross Shachter relates an experience with a government agency that commissioned a study on removing asbestos from schools. The decision analysts performing the study assumed a particular dollar value for the life of a school-age child, and argued that the rational choice under that assumption was to remove the asbestos. The agency, morally outraged at the idea of setting the value of a life, rejected the report out of hand. It then decided against asbestos removal — implicitly asserting a lower value for the life of a child than that assigned by the analysts.
So, if one actually cares about lives not being lost, the optimal approach is to assign as accurate a value to a life as possible. Deliberately refusing to do so makes sense only if one cares more about something else and does not mind assigning lower values implicitly.
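The arithmetic implicit in the asbestos story can be made explicit. The sketch below uses entirely hypothetical numbers (the removal cost and the expected number of deaths averted are illustrative assumptions, not figures from the actual study): removal is the rational choice whenever the value assigned to a life exceeds the cost per expected death averted, so declining removal implicitly asserts a value below that break-even point.

```python
# Hypothetical figures for illustration only; not from Shachter's study.
removal_cost = 1_000_000    # assumed cost of removing the asbestos
deaths_averted = 0.2        # assumed expected deaths averted by removal

# Break-even value of a life: the cost per expected death averted.
# Above this value, removal is the rational choice; below it, it is not.
break_even_value = removal_cost / deaths_averted  # 5,000,000 here

def rational_to_remove(value_of_life: float) -> bool:
    """Removal is rational iff the expected value of lives saved
    exceeds the cost of removal."""
    return value_of_life * deaths_averted >= removal_cost

# A value above break-even recommends removal:
print(rational_to_remove(6_000_000))   # True

# Rejecting removal implicitly asserts a value below break-even:
print(rational_to_remove(4_000_000))   # False
```

The point of the sketch is the last line: there is no neutral option. Whatever the agency decides, its decision corresponds to some range of implied values for a child's life; refusing to state a number only hides which range it is.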