
Down's Syndrome, Society, and Abortion


Comments

  • edited October 2009
    Goddamn it!
Were you going for a Godwin, or were you actually wanting to have a discussion about eugenics? I recently read a fascinating book about programs in Central and South America. I think the Nazis ruined eugenics debates in the same way they ruined swastikas, Charlie Chaplin's moustache, and anti-Semitism.
Yeah, I was going for a eugenics debate; that's why I made the comparison to Russia. (On a related note, geneticists finally took a step for the awesome.)
    Post edited by ElJoe0 on
  • edited October 2009
    I don't mean "standardized test" in the usual sense - it wouldn't even have to be entirely on paper. However, some kind of test would be very handy since it would pick out at least some people incapable of making decisions. Anyone who fails would be put under review before they were forced to have a guardian.

    However, I disagree that the test would be inherently biased. Very difficult to implement - yes, but not impossible. It would probably require individual instructors, more like a driving test.
    Most schools, doctors, parents, psychiatrists, educators, etc. essentially provide this "test" rather early in life through various methods. If someone is mentally handicapped it is already quantifiable.
Bias is inescapable for two reasons:
    1) The nature of standardized testing that exists today is biased (usually unintentionally) owing to content and mode.
    2) A standard of "basic reasoning" must be set and it is nearly impossible to create this standard objectively.
    Post edited by Kate Monster on
  • edited October 2009
    1) The nature of standardized testing that exists today is biased (usually unintentionally) owing to content and mode.
    That is a statement of how it is now, not how it could be. I guess we're going to end up back at Scott's prior discussion of morality and ideals vs practicality and legality if we take it much further, though.
    2) A standard of "basic reasoning" must be set and it is nearly impossible to create this standard objectively.
    It might not be possible to draw the line, but there's a pretty good chance we could come up with tests that would catch a number, perhaps a large number, of people who would definitely fall below the line if that line could be drawn.
    Post edited by lackofcheese on
  • The false-positive rate tells you how likely it is for a person without the disease to be diagnosed as having the disease. This probability is not the one we're concerned with - we want to know how likely a person screened as having the disease is to have the disease, given the other information we know about the person. The first one is independent of the person, and always has a value of 5%. On the other hand, the second is heavily dependent on the person - for example, it is 0% for someone who is known not to have the disease, and 100% for someone who is known to have the disease.
    You're wrong. That's not how screening assays work. The false-positive rate refers to the test itself. If you test someone and obtain a positive result, it is very likely to be a true positive. The age of that person may indicate the likelihood of obtaining a positive result, but it in NO WAY affects the validity of a positive result.

    In other words, the probability of obtaining a given result with a screening assay does not indicate the probability that any given result obtained with the screening assay is valid or not.
  • edited October 2009
    If that's the case, screening assays are using a different definition of the term "false positive rate" than the rest of the world. Wikipedia - false positive rate
    The false positive rate is the proportion of negative instances that were erroneously reported as being positive.
    I find it more likely that you are wrong.
    Post edited by lackofcheese on
  • edited October 2009
    I find it more likely that you are wrong.
    I'm a diagnostic microbiologist. I use methodologies like this all the time. The false-positive rate of most diagnostic assays actually refers to specificity and sensitivity. Yes, they are using a different terminology, as is the case with all things scientific.

The problem arises when your assay has a detection threshold of some sort. Let's say that the normal results for tests A, B, and C for a non-Down's baby are all 1000. Your assay has a threshold for detection that's set at 1050 for all assays. Positive results wherein all values are sitting right at the minimum threshold are more likely to be false positives in most cases.
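To make the threshold point concrete, here is a minimal sketch in Python. The 1000 baseline and 1050 cutoff come from the example above; the affected-population mean of 1150 and the spread of 50 are invented purely for illustration:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# Hypothetical populations: unaffected pregnancies centre on the 1000
# baseline; the affected mean (1150) and spread (50) are assumptions.
UNAFFECTED_MEAN, AFFECTED_MEAN, SIGMA = 1000.0, 1150.0, 50.0

def likelihood_ratio(value):
    """How many times more probable this measured value is under
    'affected' than under 'unaffected'."""
    return (normal_pdf(value, AFFECTED_MEAN, SIGMA)
            / normal_pdf(value, UNAFFECTED_MEAN, SIGMA))

# A "weak positive" just over the 1050 cutoff versus stronger positives:
for value in (1055, 1100, 1150):
    print(f"measured {value}: likelihood ratio ~ {likelihood_ratio(value):.1f}")
```

Under these made-up numbers a result at 1055 carries a likelihood ratio well below 1: it is more likely to have come from an unaffected pregnancy even though it clears the cutoff, which is exactly the "weak positive" problem described above.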
    Post edited by TheWhaleShark on
  • edited October 2009
    The definition you linked is equivalent to the one I stated before. Unfortunately, you're still wrong, but it's even more worrying now.
    The age of that person may indicate the likelihood of obtaining a positive result, but it in NO WAY affects the validity of a positive result.
It very much does. Bayes' rule is extremely difficult to get your head around. I don't blame you for it, but you're still wrong. In the meantime I recommend you watch the video I linked above, which I will repost for convenience:

Let's say we take 100 fetuses for which we know with 100% certainty that they do not have Down's Syndrome, and put them through the test (unbeknownst to anyone else). If the specificity is 95%, on average 5 of them will be diagnosed with Down's Syndrome. Are you saying that each of these babies must have a 95% chance of having Down's syndrome? That's a contradiction!
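As a sanity check on this thought experiment, a minimal simulation sketch; the 95% specificity is the figure under discussion, and the fixed seed is only there to make the illustration reproducible:

```python
import random

random.seed(0)  # reproducible illustration only

SPECIFICITY = 0.95       # probability that a true negative tests negative
N_KNOWN_NEGATIVES = 100  # fetuses known with certainty to be unaffected

# Each known negative independently returns a false positive with
# probability 1 - specificity = 5%.
false_positives = sum(
    random.random() < (1 - SPECIFICITY) for _ in range(N_KNOWN_NEGATIVES)
)
print(f"{false_positives} of {N_KNOWN_NEGATIVES} known negatives screened positive")

# Every one of those positives still has a 0% chance of Down's syndrome,
# because the true status was stipulated in advance: the test result alone
# cannot be read as "this fetus is 95% likely to be affected".
```

On average about 5 of the 100 will screen positive, which is the contradiction being pointed out.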
    Post edited by lackofcheese on
If that's the case, screening assays are using a different definition of the term "false positive rate" than the rest of the world. Wikipedia - false positive rate
The false positive rate is the proportion of negative instances that were erroneously reported as being positive.
I find it more likely that you are wrong.

There seems to be some failed communication here, as you guys are arguing without really contradicting each other.

    The page that Cheese just linked to does not indicate that Pete is wrong about false positive rates. The variables for that equation refer to the behavior of the test. "Negative instances" does not refer to people who never got the test and turned out to be negatives. It only refers to people who got the test and got negative results. HOWEVER, that hasn't proved the argument that the false positive rate is not the statistic that we are looking for to be wrong.

Cheese says we want to know what percent of people with a positive result are actually positive. The false positive rate is not calculated by [# false positives/total positives], according to that link, so it does not help us answer this question. You cannot simply say there is a 5% false positive rate, so 95% of people with positive results are actually positive. That would be an incorrect interpretation of the rate. You need a true positive rate...[# true positives/total positives]. This cannot be calculated by subtracting the false positive rate ... [# false positives/total negatives]...from 100%.


    If I am misreading either of you, please enlighten me.
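Nuri's distinction is easy to check numerically. A minimal sketch: the 5% false positive rate and a 95% detection rate are the figures from the thread, while the 1-in-1000 prevalence is an assumption purely for illustration:

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Fraction of positive results that are true positives: TP / (TP + FP)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(
    prevalence=0.001,          # assumed 1-in-1000 base rate (hypothetical)
    sensitivity=0.95,          # detection rate from the thread
    false_positive_rate=0.05,  # false positive rate from the thread
)
print(f"PPV ~ {ppv:.1%}")  # ~1.9%, nowhere near the naive 95%
```

A 5% false positive rate therefore does not mean 95% of positives are real; when the condition is rare, most positives can be false even though the test itself is good.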
  • edited October 2009
    Nuri is correct on all counts.
    Post edited by lackofcheese on
  • edited October 2009
    Cheese says we want to know what percent of people with a positive result are actually positive.
    Ahhhhh. OK. I misunderstood Cheese's use of "false positive rate" in this sense.

    In any event, if you want to know the true positive rate, your only option is to use a gold standard method. In the case of Down's, that would be CVS or amniocentesis. In any clinical laboratory or regulatory laboratory of any kind, you always go back to a gold standard method before making your final diagnosis. So, if you want to find the true positive rate, you have to perform the more invasive techniques on all positive screening results, regardless of the age of the mother.
Let's say we take 100 fetuses for which we know with 100% certainty that they do not have Down's Syndrome, and put them through the test (unbeknownst to anyone else). If the specificity is 95%, on average 5 of them will be diagnosed with Down's Syndrome. Are you saying that each of these babies must have a 95% chance of having Down's syndrome? That's a contradiction!
    No, I'm not saying that, and I didn't mean to imply that. A positive test result is not necessarily a true positive. Further tests are necessary to demonstrate a true positive.
    The definition you linked is equivalent to the one I stated before.
    Well, no it's not. Specificity refers only to the behavior of the test itself. For any diagnostic assay, the specificity/sensitivity are determined by controlled experiments prior to use, and verified during use.

Let me ask you something about Bayes' rule: Would it be fair to say that the less prevalent the target of a given test, the greater the likelihood that a positive result is actually a false positive? That's how I'm interpreting it.

EDIT: Ah fuck it. Wall of text time. I'm pretty sure I'm interpreting the theorem correctly, and that's why I was so confused from the beginning. You're oversimplifying the testing for Down's Syndrome, causing you to apply the theorem too broadly, and that's causing you to make a hidden assumption.

    The preliminary screening tools that you mentioned (the 95% detection rate and 5% false-positive rate) are not a "Down's Syndrome Y/N" test. Rather, they each measure the levels of certain things and compare those against a baseline range. Values that lie above that baseline range (I'm assuming, but the specific direction is unimportant) are considered "positive," meaning that further testing is necessary for a complete diagnosis. The values obtained from multiple different tests provide a body of evidence, from which we draw a conclusion called a diagnosis.

Bayes' theorem applies to the detected values in each individual test independent of other tests. If we arbitrarily call the baseline 0, then all values above that line are considered positive, but the closer the value is to baseline, the less reliable it is. A value close to baseline is the equivalent of a very low actual occurrence of your test target; a value much larger than baseline is the equivalent of a far more prevalent one.

This creates the concept of a "weak/borderline positive" test result. Those values which are at or just above threshold are "weak positives" and are more likely to be false positives. This is the same problem that you run into when performing a binary test for a rare illness, and is an example of Bayes' theorem holding true.

    Your hidden assumption, in order to make your argument work, is that those pregnancies that are very unlikely to result in a Down's Syndrome child would necessarily have "weak positive" test results. In order to say that positive screen results obtained in younger mothers are more likely to be false positives, you'd have to know the actual values detected for each test for each such case, and demonstrate that those values are close to baseline for each test. That is not necessarily the case. That's what I meant when I said that the rarity of young mothers having Down's babies didn't affect the validity of the screening test result; the screening test is not a binary test for Down's Syndrome. "Positive" and "Negative" are interpretations of data intended to guide investigation.

In the case of a screening test, only the prevalence of the tested variable influences the reliability of the result, and it only applies to that test. You have to address each test specifically, but your argument is lumping all testing together into "Down's Syndrome: Y/N," and you're erroneously applying Bayes' theorem to your abstracted conglomerate test. Further, you're not actually considering the specific nature of each test; your generalizations actually turn them into different tests entirely.
    Post edited by TheWhaleShark on
  • edited October 2009
Let me ask you something about Bayes' rule: Would it be fair to say that the less prevalent the target of a given test, the greater the likelihood that a positive result is actually a false positive? That's how I'm interpreting it.
    Yes.
    The preliminary screening tools that you mentioned (the 95% detection rate and 5% false-positive rate) are not a "Down's Syndrome Y/N" test. Rather, they each measure the levels of certain things and compare those against a baseline range. Values that lie above that baseline range (I'm assuming, but the specific direction is unimportant) are considered "positive," meaning that further testing is necessary for a complete diagnosis. The values obtained from multiple different tests provide a body of evidence, from which we draw a conclusion called a diagnosis.
    Though it is more complex that way, Bayes' theorem applies to continuous data just as it does to discrete data.

Bayes' theorem applies to the detected values in each individual test independent of other tests. If we arbitrarily call the baseline 0, then all values above that line are considered positive, but the closer the value is to baseline, the less reliable it is. A value close to baseline is the equivalent of a very low actual occurrence of your test target; a value much larger than baseline is the equivalent of a far more prevalent one.
    If we are referring to an individual value for an individual person, the term "probable" is more appropriate than "prevalent".
Your hidden assumption, in order to make your argument work, is that those pregnancies that are very unlikely to result in a Down's Syndrome child would necessarily have "weak positive" test results. In order to say that positive screen results obtained in younger mothers are more likely to be false positives, you'd have to know the actual values detected for each test for each such case, and demonstrate that those values are close to baseline for each test. That is not necessarily the case. That's what I meant when I said that the rarity of young mothers having Down's babies didn't affect the validity of the screening test result; the screening test is not a binary test for Down's Syndrome. "Positive" and "Negative" are interpretations of data intended to guide investigation.
    Not quite. Whether or not it is discrete or continuous is not the relevant question. I have realised that I did make at least one hidden assumption, though - that the false positive and detection rates are independent of age. This assumption is corrected by the data quoted in my next post.
In the case of a screening test, only the prevalence of the tested variable influences the reliability of the result, and it only applies to that test.
    What do you mean by the "tested variable"? Presence/absence/probability of Down's syndrome? This quantity is not independent of age!
    You have to address each test specifically, but your argument is lumping all testing together into "Down's Syndrome: Y/N," and you're erroneously applying Baye's theorem to your abstracted conglomerate test. Further, you're not actually considering the specific nature of each test; your generalizations actually turn them into different tests entirely.
My error is generalizing from a lack of data, yes, but not quite in the way you're saying. I'm not sure what you mean by "each test", but it is perfectly acceptable to statistically agglomerate everything into "Down's Syndrome: Y/N" as long as you also give a probability for this answer, which can be calculated with Bayes' rule provided you have the conditional distribution of false positive rate given age. Though the false positive rate is not independent of age, it is not entirely dependent on age either, and this is why age factors into the actual probability of a true positive given a tested positive.
    Post edited by lackofcheese on
  • edited October 2009
    Without further ado, data from here:
    RESULTS: At 15 years of age the detection rate was 77% at a 1.9% false positive rate, 84% at a 4% false positive rate at age 30, rising to 100% at a 67% false positive rate at age 49. The probability of Down's Syndrome once identified with an increased risk was 1:34 at 15 years, 1:29 at 30 years and 1:6 at 49 years. CONCLUSIONS: As with second trimester biochemical screening, the detection rate and false positive rate vary considerably with age. However, detection rates across all ages are significantly higher than with second trimester screening. The risk of a positive screening result being a Down's pregnancy is considerably greater than with second trimester screening with an average probability of 1:29, compared with 1:55 in the second trimester. This information may be useful in counselling women with an increased risk result in first trimester screening.
    So, yes, I erroneously assumed that false positive rate does not vary with age. However, note that
    The probability of Down's Syndrome once identified with an increased risk was 1:34 at 15 years, 1:29 at 30 years and 1:6 at 49 years.
    This is exactly what I've been saying all along! Under the same test circumstances, older women are more likely to have Down's syndrome fetuses. As you yourself said,
In the case of a screening test, only the prevalence of the tested variable influences the reliability of the result
Since older women have a greater a priori prevalence of Down's syndrome in unborn children, and this prevalence is not entirely covariant with the age-specific test behaviour, a positive result for an older woman is more likely to mean Down's syndrome. This is why it is important to calculate a patient-specific risk, based not only on the tests but on other data available on the patient.

    To consider only the test result and discard all other data we have of the patient is plainly naïve.
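As a rough cross-check on those figures, the odds form of Bayes' rule lands in the same ballpark. This sketch uses the 77% detection rate and 1.9% false positive rate quoted above for age 15, with the 1-in-1250 live-birth figure cited later in the thread standing in for the prior (a known simplification, since prevalence at the time of first-trimester screening is higher than at live birth):

```python
def posterior_odds(prior_prob, detection_rate, false_positive_rate):
    """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    likelihood_ratio = detection_rate / false_positive_rate
    return prior_odds * likelihood_ratio

odds = posterior_odds(
    prior_prob=1 / 1250,        # live-birth rate for young mothers, as a stand-in prior
    detection_rate=0.77,        # age-15 detection rate from the quoted study
    false_positive_rate=0.019,  # age-15 false positive rate from the quoted study
)
print(f"posterior odds of Down's given a positive screen ~ 1:{1 / odds:.0f}")
# ~1:31, in the same ballpark as the study's quoted 1:34
```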
    Post edited by lackofcheese on
  • To consider only the test result and discard all other data we have of the patient is plainly naïve.
    And no one was suggesting that we do this.
  • edited October 2009
    To consider only the test result and discard all other data we have of the patient is plainly naïve.
    Wait, what? Everything I've said was in response to:
    Is it always reasonable to do an invasive test, even if the non-invasive test gives a negative result?
And my entire point is that 1) negative results are reliable and 2) all positive results mandate further testing, which contrasts with your statement:
    Only sponsoring invasive tests for positive screening results is reasonable, but a probabilistic threshold makes more sense.
    I'm saying that irrespective of the supposed rate of false-positive screening test results based on the age of the mother, you still need to do additional testing in order to confirm those results. It may affect your confidence in obtaining a particular result, but you still have to do follow-up testing.

    I still say that your abstraction of the testing is leading you to an erroneous conclusion. The over-generalization is causing you to effectively create a different test than what exists in reality. There are several tests that have to be performed in order to diagnose Down's, and each one has a different target than any other test.
    Under the same test circumstances, older women are more likely to have Down's syndrome fetuses
    Which we knew already. The higher false positive rate of screening tests in older women is likely due to general ovarian degradation associated with age. That causes lots of problems in general, so you'll get a lot of "blips" on any test you run. The high false-positive rate also indicates a very sensitive test, as is the case with any screening test for any application.

    The result of any one test itself is insufficient. The study you linked is talking about the probability of the final "Down's Syndrome: Y/N" question given a single positive screening test result, based on age. The thing is, it's rare for a young mother to have a Down's Baby in the first place. From this link you already posted, the probability of any 15-19 year old woman having a live Down's birth is 1/1250. With a positive screening test result, that probability becomes 1/34. You're only looking at the absolute risk, but I'm saying that the relative risk is what matters here. There is a massive increase in the likelihood of a young woman having a Down's baby given a positive screening test result. The increase in likelihood is much smaller in older women.

    My understanding of your assertion is that a positive initial screening test result for a young mother is more likely to be a false-positive, since the condition is rarer in young women. As such, younger mothers should have to pay for follow-up testing, whereas older mothers should not. Is this correct?
    Presence/absence/probability of Down's syndrome?
    No. That entire section was very specifically written to tell you that no test is a test for Down's Syndrome. You've abstracted all the tests performed into that, and it's causing an error in judgment.

    Let me get specific. We agreed:
    the less prevalent the target of a given test, the greater the likelihood that a positive result is actually a false positive
That is Bayes' theorem. If you look at the screening tests performed for Down's, they all test for different things.

Let's look at the Quad screen. It tests the serum levels of 4 proteins: AFP, estriol, hCG, and DIA. There are normal levels for each of those measured values. The test will measure the levels of those proteins in the patient. Bayes' theorem can be applied there. If a given measurement is out of range but very close to the normal range, it's more likely to be a false-positive result. That is where Bayes' theorem applies to the false-positive rate of the test. The likelihood of the final true positive result doesn't matter in this test.

    Think of it like this: the target of the Quad screen is the disparity between baseline and tested value. The greater the disparity, the greater the accuracy of the test. The smaller the disparity, the lower the accuracy of the test. I can draw the same parallel looking at fluorescence values from an ELFA assay designed to detect specific bacteria.

My assertion is that you're applying Bayes' theorem improperly because you're not looking at the actual test.
    Post edited by TheWhaleShark on
You guys have really taken all the fun out of this otherwise interesting thread....
  • edited October 2009
    EDIT: Never mind what was here before. It's redundant. I believe you want the positive predictive value. The PPV of initial testing like this is going to be extremely low. However, the NPV of most such tests is very large (>99% usually), so these screens are most useful for ruling out a potential disease. Additional testing is needed before a positive diagnosis is made in all cases.
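To make the PPV/NPV contrast concrete, here is a minimal sketch of the negative predictive value under assumed figures; the 1-in-1000 prevalence is hypothetical, while the 95% sensitivity and 5% false positive rate are the numbers discussed upthread:

```python
def negative_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Fraction of negative results that are true negatives: TN / (TN + FN)."""
    true_neg = (1 - prevalence) * (1 - false_positive_rate)
    false_neg = prevalence * (1 - sensitivity)
    return true_neg / (true_neg + false_neg)

npv = negative_predictive_value(
    prevalence=0.001,          # assumed 1-in-1000 base rate (hypothetical)
    sensitivity=0.95,
    false_positive_rate=0.05,
)
print(f"NPV ~ {npv:.3%}")  # ~99.995%: negatives are very reliable for a rare condition
```

This is why screens like these are most useful for ruling a condition out: the rarer the condition, the higher the NPV, even while the PPV stays low.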
    Post edited by TheWhaleShark on
  • To consider only the test result and discard all other data we have of the patient is plainly naïve.
    Wait, what? Everything I've said was in response to:
    Is it always reasonable to do an invasive test, even if the non-invasive test gives a negative result?
And my entire point is that 1) negative results are reliable and 2) all positive results mandate further testing, which contrasts with your statement:

    Only sponsoring invasive tests for positive screening results is reasonable, but a probabilistic threshold makes more sense.
I'm saying that irrespective of the supposed rate of false-positive screening test results based on the age of the mother, you still need to do additional testing in order to confirm those results. It may affect your confidence in obtaining a particular result, but you still have to do follow-up testing.

These statements do not contradict what I have said; you've misinterpreted my main point:
My understanding of your assertion is that a positive initial screening test result for a young mother is more likely to be a false-positive, since the condition is rarer in young women. As such, younger mothers should have to pay for follow-up testing, whereas older mothers should not. Is this correct?
    Not at all! My assertion is that we should pay for invasive testing only for those at a sufficiently high level of risk. Let's go back to the statement of mine you're probably basing this on:
    This might mean that, say, you only get the test for free if you're above a certain age and/or you have a positive screening result.
Though this was merely an example, which you seem to have taken to be my main point, note that I said and/or!

Indeed, I agree that all positive results mandate further testing in this specific case, because, as we have seen before, a positive result means at least a 1/30 chance of Down's syndrome. Also, though I need additional data, a negative result will probably mean at most a 1/200 or so chance of Down's syndrome. Consequently, for this specific case, I agree with mandating testing only for positive test results, but only because a positive test result means a sufficiently high risk level, and a negative test result means a sufficiently low risk level. If the test were less reliable, however, it would be justified either to do further testing of higher-risk negative results, or to discard positive results which nonetheless correspond to a relatively low risk.
The thing is, it's rare for a young mother to have a Down's Baby in the first place. From this link you already posted, the probability of any 15-19 year old woman having a live Down's birth is 1/1250. With a positive screening test result, that probability becomes 1/34. You're only looking at the absolute risk, but I'm saying that the relative risk is what matters here. There is a massive increase in the likelihood of a young woman having a Down's baby given a positive screening test result. The increase in likelihood is much smaller in older women.
    That's nonsensical. Of course the absolute risk is what matters - we want to detect Down's syndrome, so what we care about is how probable it is for any given test subject to have a Down's syndrome fetus. In this case, given a positive test result, the risk is already ~1/30 of Down's syndrome even for young mothers. This is high enough to warrant invasive testing. It is also probably the case that any mother with a negative test result has a risk of at most 1/200 or so of Down's syndrome. This is sufficiently low to warrant not doing invasive testing. In this case, the presence or absence of a positive test result is a reasonable threshold to base further testing on, as I had already said.

    For example, if the invasive test was less dangerous and expensive, it would be good to lower the threshold. In this case, we could justify paying for invasive testing on some mothers who have had negative screening results, but only some - these would probably be older mothers.
    Let me get specific. We agreed:
the less prevalent the target of a given test, the greater the likelihood that a positive result is actually a false positive. That is Bayes' theorem.
Sorry, perhaps it was somewhat misleading to agree to this. This is a statement of the consequences of a valid application of Bayes' theorem, but the theorem itself is much more general. It is a statement on the conditional probability of events, given other conditional and marginal probabilities. Among other things, Bayes' theorem provides a way for us to aggregate all of the data on a patient, including test results and age, into an overall, patient-specific risk of Down's syndrome. My point is that after a positive or negative result, we calculate, using all the data we have, a patient-specific risk of a Down's syndrome child. If this risk is above a certain threshold, invasive testing should be sponsored. I would need further data, but I completely agree that this threshold risk should be less than 1/30, since this is 3 times as likely as a miscarriage due to the invasive test, and the previous data said that a 15-year-old mother with a positive Down's syndrome result had a 1/30 chance of Down's syndrome.

    However, if our threshold risk was 1%, and a mother with a negative test result had a 2% chance of Down's syndrome, we should also sponsor invasive testing in this case as well.
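A minimal sketch of the decision rule being proposed here, using the 1% threshold from the example just above; the case risks (1/30, 1/200, 2%) are the figures mentioned in this post and are illustrative only:

```python
def should_sponsor_invasive_test(patient_risk, threshold=0.01):
    """Sponsor invasive testing whenever the patient-specific risk of
    Down's syndrome, aggregated from all available data (screen result,
    age, etc.), exceeds the chosen threshold."""
    return patient_risk > threshold

# Illustrative patient-specific risks drawn from the figures in this post:
cases = {
    "young mother, positive screen (~1/30)": 1 / 30,
    "mother with negative screen (~1/200)": 1 / 200,
    "mother with negative screen but 2% overall risk": 0.02,
}
for label, risk in cases.items():
    print(f"{label}: sponsor invasive test = {should_sponsor_invasive_test(risk)}")
```

Under the 1% threshold the 2% negative-screen case still qualifies for sponsored invasive testing, which is exactly the point of computing a patient-specific risk rather than keying everything to the raw screen result.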
My point is that after a positive or negative result, we calculate, using all the data we have, a patient-specific risk of a Down's syndrome child. If this risk is above a certain threshold, invasive testing should be sponsored. I would need further data, but I completely agree that this threshold risk should be less than 1/30, since this is 3 times as likely as a miscarriage due to the invasive test, and the previous data said that a 15-year-old mother with a positive Down's syndrome result had a 1/30 chance of Down's syndrome.
    Ahhhhhhhhhhhhhhhhhh. That's different than what I thought you were arguing. My apologies for the confusion.

    When I speak of relative risk, I'm talking about patient-specific risk as well. It goes back to the point I made earlier: a normal 15 year old girl has a 1/1250 shot of having a Down's baby, but initial positive screening tests bump that to about a 1/30 chance. I was saying that that young girl's relative risk was astronomically high and warranted the invasive testing to confirm, even though her overall risk is still relatively low compared to older women.

I thought you were arguing that the invasive testing should not be performed on those with the lowest probability of Down's after the initial positive screen. Clearly you are not, and we both agree that follow-up testing is absolutely necessary in the case of a positive screening test. I also didn't see this statement the first time I read your post:
    and/or you have a positive screening result.
    Did you edit that back in at some point? That pretty much makes every argument I've put forth moot.

    As for negative results, these particular diagnostic screens have a very high level of sensitivity. The negative predictive value is somewhere in the neighborhood of 99.98%, I believe, so a negative result is as reliable as these tests get. If a woman gets a negative result on the initial (non-invasive) screening test, she should not simply be entitled to a follow-up with more invasive tests.

    Generally speaking, if any initial screening test is positive, I would say that patient automatically moves into a "high-risk" group for follow-up testing. That's how screening tests are used.
    This is a statement of the consequences of a valid application of Bayes' theorem, but the theorem itself is much more general.
    Well, yes, but I still think that the more general abstracted view can paint a somewhat misleading picture. That's why I was applying the theorem very specifically to very specific parts of one particular test. In the micro lab, we make use of diagnostic molecular testing all the time, and just as is the case with any clinical test, a "positive" screen test always warrants follow-up testing, regardless of the chances of it being a true positive.

    What I've found is that, as I said, the closer a measured value is to the "negative" range, the greater the likelihood of that positive being a false positive. We've tracked this phenomenon and found that it occurs totally independent of the risk rating of any given sample type. That's what I was saying about the "prevalence of test target" business.

    I'm all about risk-based testing. It's what we do. There is a slight problem, though, in that certain applications of risk-based testing cause you to miss developing problems or unknown problems in select groups because they've not yet been identified as "high-risk" in that area. Particularly in food safety, the idea is to be proactive in testing and find problems that we've not yet detected. If we just focus all of our efforts on the known world, we'll get blind-sided.

    So, anyhow, we should definitely pay for follow-up testing when an initial screening test for Down's is positive.

    When that initial screening test is negative, I see no reason to pay for the follow-up testing. The initial screening tests are over-conservative in their detection of Down's, so a negative result is as reliable as it gets. If an older woman wants additional testing, she (or her insurance) needs to pay for it.
This story kind of reminds me of one of the chapters in "Freakonomics." Apparently, ever since the Roe v. Wade decision the crime rate has been steadily decreasing, supposedly because it decreases the number of births in inner-city areas. I just thought it was pretty interesting.
Apparently, ever since the Roe v. Wade decision the crime rate has been steadily decreasing, supposedly because it decreases the number of births in inner-city areas. I just thought it was pretty interesting.
That's a bit misleading -- it's more because it decreases the number of births to mothers who don't want or can't support kids, which tends to make those kids more likely to engage in antisocial behavior.
Apparently, ever since the Roe v. Wade decision the crime rate has been steadily decreasing, supposedly because it decreases the number of births in inner-city areas. I just thought it was pretty interesting.
That's a bit misleading -- it's more because it decreases the number of births to mothers who don't want or can't support kids, which tends to make those kids more likely to engage in antisocial behavior.
He covers that in the book, but it is interesting to see that the net effect of Roe v. Wade was to reduce crime.
He covers that in the book,
    Yeah, I read the book -- I meant that progSHELL's description of it was misleading.
Apparently, ever since the Roe v. Wade decision the crime rate has been steadily decreasing, supposedly because it decreases the number of births in inner-city areas. I just thought it was pretty interesting.
That's a bit misleading -- it's more because it decreases the number of births to mothers who don't want or can't support kids, which tends to make those kids more likely to engage in antisocial behavior.
    You're right, it was a little misleading. Thanks for correcting me.
  • Judge Posner dropping bombs.

    What makes no sense is to abridge the constitutional right to abortion on the basis of spurious contentions regarding women’s health — and the abridgement challenged in this case would actually endanger women’s health.
