
27 posts in this topic

Recommended Posts

Posted
2 minutes ago, IDWAF said:

Considering that there has been at least one major shift between liberals and conservatives in the past, it’s not really that hard to see some of the stuff in this study being true.

While looking for a link to the study, another news article I skimmed mentioned post-9/11 research: some studies showed that conservative viewpoints increased overall in the immediate aftermath. I don't know many details (I forget which article I read it in), but it seems reasonable, at least at face value, and it would also somewhat support the idea that feelings of safety can help promote socially liberal ideology while feelings of being "at risk" can push people toward more conservative ones.

 

As someone else said here in this thread, it is literally part of the meaning of "conservative". You're less willing to accept risk. If you don't feel safe, it would make sense you would not want to accept as much risk. If you are "invincible", you can be a risk taker. 

Filed: IR-1/CR-1 Visa Country: Ukraine
Timeline
Posted
10 minutes ago, bcking said:

Just rereading your post I think we should be clear on one thing -

 

There was nothing "predetermined" about the outcomes. They had a hypothesis, that is normal in science. They were testing that hypothesis.

"Desire" is a tricky term here. As I said, they had a hypothesis, and they were testing the hypothesis. Did they want the hypothesis to be true? Probably, but that isn't a bad thing on its own. Most scientists care strongly about the topic they study, because they devote a large portion of their career to it. If they have a hypothesis that will help further their study, they will want that hypothesis to be true because of their interest in furthering the subject, and also for more "selfish" reasons (many scientists' careers depend on grant funding and support, which only continues to flow if you have results that are meaningful and warrant further study). That on its own is just human nature, and it isn't automatically a bad thing. They just need to create methodology that ensures that their bias (desiring an outcome) doesn't influence the results.

 

With that in mind, a couple of other things the authors should have addressed:

 

- Who conducted the phone interviews? The investigators, or others? Were the questions prerecorded? As with any survey, the way things are asked can impact the results, so they should attempt to minimize that risk in some way. Many political surveys these days use prerecorded messages so that the inflection, tone, and way the questions are read are consistent across all conversations. At the very least, have volunteers who don't know the purpose of the study conduct the phone interviews, so their bias doesn't influence how they speak and act on the phone.

- They should have included a power analysis in their methodology, though some would say it isn't as important since their headline result is statistically significant. Some of their results weren't significant, though, so it would be important to know whether they even had the power required to find statistically significant differences given their number of participants. It doesn't impact the finding that is significant (even if you don't have the power expected to find a significant result, a significant result is still significant; it just means your chance of finding a significant result, if one existed, was low), assuming it truly is significant (see my issue with their not reporting their statistical test). But, for example, perhaps the difference between responses for Democrats is also significant and they just didn't have the power to detect it. That would change their overall interpretation of the results.
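To make the power-analysis point concrete, here's a rough back-of-the-envelope sketch using a normal approximation and the numbers quoted elsewhere in the thread (means of 5.09 vs. 6.48, an assumed SD of 2.5, and roughly 22 Republicans per story group; the SD and group size are assumptions, not figures from the paper):

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means
    for standardized effect size d, using the normal approximation
    (good enough for a back-of-the-envelope check)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    # Ignores the negligible opposite-tail term.
    return 1 - nd.cdf(z_crit - noncentrality)

# Assumed numbers from the thread: means 5.09 vs 6.48, SD ~2.5,
# roughly 22 Republicans per story group.
d = (6.48 - 5.09) / 2.5
print(round(approx_power(d, 22), 2))  # well under the conventional 0.8
```

Under those assumptions the study would have been clearly underpowered for its subgroup comparisons, which is exactly why a null result for Democrats is hard to interpret without a reported power calculation.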

 

I find it hard to believe that any scientists were involved in this. Political hacks would be my guess. 

Posted (edited)
13 minutes ago, eieio said:

 

I find it hard to believe that any scientists were involved in this. Political hacks would be my guess. 

That is harsh language coming from someone who, I'm assuming, is not a scientist themselves.

 

- The paper is studying a topic that has been actively researched in the past. They aren't just coming up with a study out of thin air. Their hypothesis is based on prior work by not just the PI but also other researchers. I provided a link above to a review from 2003 (I think?) on the topic.

- The paper has a hypothesis, and developed methodology that can test the hypothesis. Is the methodology perfect? No. Is it the best way to test the hypothesis? Not necessarily, but you always have to weigh practicality against level of evidence.

 

 

The paper has flaws, and I pointed out several of them (though I'm sure my list wasn't exhaustive). That doesn't mean the people who conducted the study aren't scientists, or that the study isn't a scientific paper. We've had this discussion here before, I recall (around 2 months ago?). You seem quick to judge the people and to assume something isn't "science" just because you question the methodology. It's become a trend now. Personally, I don't think it's appropriate.

 

EDIT:

 

If you look at the PI's body of literature (the PI is John Bargh), he isn't someone who has focused his career on a political topic. He doesn't publish result after result looking at differences between conservatives and liberals. That isn't his primary research interest. He studies priming, and the impact that external influences can have on what are typically considered "volitional choices". This study fits in with that while having a political bent, which clearly makes some people uncomfortable.

Edited by bcking
Filed: Citizen (apr) Country: Russia
Timeline
Posted
2 hours ago, bcking said:

While looking for a link to the study, another news article I skimmed mentioned post-9/11 research: some studies showed that conservative viewpoints increased overall in the immediate aftermath. I don't know many details (I forget which article I read it in), but it seems reasonable, at least at face value, and it would also somewhat support the idea that feelings of safety can help promote socially liberal ideology while feelings of being "at risk" can push people toward more conservative ones.

 

As someone else said here in this thread, it is literally part of the meaning of "conservative". You're less willing to accept risk. If you don't feel safe, it would make sense you would not want to accept as much risk. If you are "invincible", you can be a risk taker. 

I think there is a difference between types of risk, and they cannot all be combined to forward a narrative. For instance, fiscally I tend to be conservative, but socially I tend more toward the liberal side. There is also personal risk: I know I never cringed at going into a burning building to perform a rescue or put a fire out. My point is that there are many categories of risk, which makes this study somewhat pointless.


Posted (edited)
22 minutes ago, Bill & Katya said:

I think there is a difference between types of risk, and they cannot all be combined to forward a narrative. For instance, fiscally I tend to be conservative, but socially I tend more toward the liberal side. There is also personal risk: I know I never cringed at going into a burning building to perform a rescue or put a fire out. My point is that there are many categories of risk, which makes this study somewhat pointless.

The study isn't combining different types of risk; it isn't really even dealing with risk. The study is, similar to prior studies Dr. Bargh has conducted, looking at whether subconscious, involuntary "feelings" can influence decisions we tend to believe we have control over. Specifically, a subconscious "feeling" of "safety", and how that impacts how someone ranks themselves on a scale.

 

If you ask me on a scale of 1 to 10 how socially conservative I am, I will give you a number. Where does that number come from? From an "opinion of myself". How do I decide through self-reflection how to score myself? What factors impact my decision?  The study would suggest that immediately preceding subconscious feelings can influence how we self-reflect and self-identify. 

 

What I think would also have been interesting is if they had a third group that wasn't exposed to either story but was just asked the political questions. An even stronger study design would have been a "crossover" style with a reasonable washout period: you ask the same people the same questions 6 months later (or after whatever period you deem necessary to "wash out" their original responses) but expose them to the other story (or no story at all). You'd have to balance the washout period against the risk of opinions/viewpoints genuinely changing.

 

The issues involving risk/safety in the study aren't as strong a weakness as the inherent weakness of self-reporting scales. I don't know the literature on intra-rater reliability of self-reflective scoring scales. In other words: if you ask someone to place themselves on a "scale" at 10 random time points, will they always pick the same number? I'm sure a great many things impact what number they would self-assign. It may change from hour to hour, or day to day. Of course it's tricky, because their memory of their original answer would push them to choose the same number (since we all inherently want to remain internally consistent) even if at a different time point they would genuinely choose a different number.

 

EDIT:

 

The other problem with the study is relating this single-time-point event to the general political ideologies that people hold over long periods. A single moment of subconscious "safety" may influence how you rate yourself immediately afterward. That doesn't mean your general social viewpoints rest on a more generalized feeling of safety (or risk) that pervades your everyday life. That's why I found the article's line about "turning Republicans into Democrats, until now" (or whatever it was) comical. It was there to grab attention, which is unfortunate.

Edited by bcking
Filed: Citizen (pnd) Country: Ireland
Timeline
Posted
1 hour ago, bcking said:

Apparently this study is not the first to look at this. It is an entire area of research in psychology. Here is a review from 2003 on the topic:

 

http://faculty.virginia.edu/haidtlab/jost.glaser.political-conservatism-as-motivated-social-cog.pdf

 

People have also done somewhat similar studies in reverse, showing that traditionally "liberal" thinkers can be pushed toward conservatism when they perceive their safety to be more at risk.

 

Right away, though, the WaPo article gets pretty ridiculous.

 

"But no one had ever turned conservatives into liberals. Until we did." - Dun dun duhhhhh!!! Oh the drama

 

As for the actual paper (which I found access to, but it isn't public access so I can't share it sorry) -

 

On the methods -

- Participants were "recruited" to the study (a chance to win a gift certificate!), so right away you have potential selection bias. It's not clear what they told people about the study, but in general you always have to ask whether there are differences between the population you recruited and the population as a whole. They were mostly white (75%), mostly female (66%), and the average age was 35. That's all the demographic information they give us.

- They don't state outright that participants were "randomized" to hear one of the two stories (flying vs. invulnerability). I'm assuming they were, and I think this is just an error of omission on their part, but it should be clear how participants were assigned to each group. This is a great example of a little thing that annoys me.

- They originally had 158 participants, but 13 wouldn't fully cooperate (some refused to answer the questions regarding partisanship, and 5 more didn't respond to the dependent measures). You could think of that as an 8% "loss to follow-up" (13/158 ≈ 8.2%). We don't know how those participants would have impacted the results, though 8% isn't an objectively bad amount (in these sorts of studies you expect to lose some people; I don't do these kinds of studies, so I don't know what is normal, but my gut says less than 10% is okay).

- 31% of participants were Republican, so about 45 people. There were no differences between the two "story groups", so we can assume around 22 Republicans/conservatives were in the invulnerability group. Not a large population.

- They don't mention in their methods section what statistical tools they used to analyze their results. In their results they report p values but I don't know what statistical test they are using so I don't know if they are using the right one. I wouldn't have let that fly if I were one of the reviewers for publication.

 

As for the results -

- The main one they are touting is that the "socially conservative" scale (1 to 9) for the Republicans who heard the invulnerability story was significantly lower than for those who heard the flying story (mean of 5.09 vs. 6.48, p = 0.034). Both groups had a standard deviation of around 2-3, so there was most likely a wide range of responses. For something like a scale I would have liked to see a median and IQR as well, because in order to compare means the way they have, the results should be normally distributed. If responses aren't normally distributed, a mean doesn't adequately reflect the population (the mean and median will differ substantially).

- The Democrats' "socially conservative" scale didn't differ between the two groups (3.57 vs. 3.77).

- No differences in the economically conservative scale for either group.

 

My biggest issue is my first bullet point in the results. In a study like this you should be clear about which statistical tool you are using, and many of them assume a normal distribution. They should show that the test applies to their data set; if it doesn't, they need to use another tool to compare the groups (which one would be appropriate is a bit beyond me; I would rely on a statistician). Many studies don't have statisticians involved, which is a shame. Many doctors think they can just do it themselves, and it's amazing how many bad studies come out as a result.

I love your analysis. In my defense I posted the article not as definitive proof one way or the other. Just a starting point for discussion. 
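One way to see the mean-versus-median concern from the analysis above: on a bounded 1-9 scale, responses piled up near one end pull the mean away from the median (purely illustrative numbers, not data from the paper):

```python
import statistics

# Illustrative 1-9 scale responses bunched near the top of the scale
# (made-up data; the paper reports only means and SDs).
responses = [9, 9, 9, 8, 8, 8, 7, 7, 6, 5, 3, 2, 1]

print(statistics.mean(responses))    # pulled down by the low-end tail
print(statistics.median(responses))  # the "typical" respondent
# When these diverge, reporting only the mean can mislead.
```

That divergence is exactly why a reviewer would ask for medians and IQRs alongside means on a skewed, bounded scale.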


Posted
13 minutes ago, Póg mo said:

I love your analysis. In my defense I posted the article not as definitive proof one way or the other. Just a starting point for discussion. 

I think the fact that the study looks at political opinions detracts from its more interesting implications. The political slant is going to automatically make people uncomfortable, defensive, and quick to either praise it or hate it. Politics is so polarizing that people tend to turn anything that even remotely touches on it into a polarizing issue (you either love/hate it, agree/disagree, no shades of grey).

 

To me the same study could apply to many other scenarios and be equally interesting. It's ultimately about the relationship between what we believe to be our "conscious decisions" or our "free will" and the things that subconsciously influence us. 

 

I doubt the participants consciously thought, "Well, since I just heard that story where I was invincible, I still feel quite invincible, so if I ask myself whether I'm socially conservative, I can be a risk taker, and therefore I will score myself more liberal than I normally would because right now I feel like I can take risks." They subconsciously had feelings of safety from imagining a fictional scenario, and their responses to self-reflection questions were (potentially) different as a result. I think that is fascinating, if it's true (the limitations I've discussed obviously still apply).

 

One thing it seems this PI has an issue with is reproducibility. People have tried to replicate his studies in the past with great difficulty. Part of that comes with the territory (psychological studies are likely much harder to reproduce than far more objective studies where you are counting cell lines or something). Part of it probably is at least some level of bias influencing the results. He could do well to blind aspects of his studies, particularly the analysis and data collection (have someone do the analysis or data collection without knowledge of the hypothesis or exact study design).

 

Filed: Citizen (apr) Country: Ecuador
Timeline
Posted

Always need to know how many rats are in the control group...


Filed: IR-1/CR-1 Visa Country: Israel
Timeline
Posted

They call this a study :lol:
