Interpreting Hazard Ratio: Can we say "percent reduction ...


All nitpicks, criticism, refutations, and discussion of new study ‘low carb increases mortality’

You know the one.
This study: KetoScience Link
The Lancet Public Health

Dietary carbohydrate intake and mortality: a prospective cohort study and meta-analysis

Open Access. Published: August 16, 2018
DOI: https://doi.org/10.1016/S2468-2667(18)30135-X
Open access funded by National Institutes of Health

Summary

Background

Low carbohydrate diets, which restrict carbohydrate in favour of increased protein or fat intake, or both, are a popular weight-loss strategy. However, the long-term effect of carbohydrate restriction on mortality is controversial and could depend on whether dietary carbohydrate is replaced by plant-based or animal-based fat and protein. We aimed to investigate the association between carbohydrate intake and mortality.

Methods

We studied 15 428 adults aged 45–64 years, in four US communities, who completed a dietary questionnaire at enrolment in the Atherosclerosis Risk in Communities (ARIC) study (between 1987 and 1989), and who did not report extreme caloric intake (<600 kcal or >4200 kcal per day for men and <500 kcal or >3600 kcal per day for women). The primary outcome was all-cause mortality. We investigated the association between the percentage of energy from carbohydrate intake and all-cause mortality, accounting for possible non-linear relationships in this cohort. We further examined this association, combining ARIC data with data for carbohydrate intake reported from seven multinational prospective studies in a meta-analysis. Finally, we assessed whether the substitution of animal or plant sources of fat and protein for carbohydrate affected mortality.

Findings

During a median follow-up of 25 years there were 6283 deaths in the ARIC cohort, and there were 40 181 deaths across all cohort studies. In the ARIC cohort, after multivariable adjustment, there was a U-shaped association between the percentage of energy consumed from carbohydrate (mean 48·9%, SD 9·4) and mortality: a percentage of 50–55% energy from carbohydrate was associated with the lowest risk of mortality. In the meta-analysis of all cohorts (432 179 participants), both low carbohydrate consumption (<40%) and high carbohydrate consumption (>70%) conferred greater mortality risk than did moderate intake, which was consistent with a U-shaped association (pooled hazard ratio 1·20, 95% CI 1·09–1·32 for low carbohydrate consumption; 1·23, 1·11–1·36 for high carbohydrate consumption). However, results varied by the source of macronutrients: mortality increased when carbohydrates were exchanged for animal-derived fat or protein (1·18, 1·08–1·29) and mortality decreased when the substitutions were plant-based (0·82, 0·78–0·87).

Interpretation

Both high and low percentages of carbohydrate diets were associated with increased mortality, with minimal risk observed at 50–55% carbohydrate intake. Low carbohydrate dietary patterns favouring animal-derived protein and fat sources, from sources such as lamb, beef, pork, and chicken, were associated with higher mortality, whereas those that favoured plant-derived protein and fat intake, from sources such as vegetables, nuts, peanut butter, and whole-grain breads, were associated with lower mortality, suggesting that the source of food notably modifies the association between carbohydrate intake and mortality.

Funding

National Institutes of Health.
Dr Sara Seidelmann, clinical and research fellow in cardiovascular medicine from Brigham and Women's Hospital in Boston, who led the research, said: "Low-carb diets that replace carbohydrates with protein or fat are gaining widespread popularity as a health and weight-loss strategy.
"However, our data suggests that animal-based low carbohydrate diets, which are prevalent in North America and Europe, might be associated with shorter overall life span and should be discouraged.
"Instead, if one chooses to follow a low carbohydrate diet, then exchanging carbohydrates for more plant-based fats and proteins might actually promote healthy ageing in the long term."

Reactions:

https://twitter.com/SBakerMD/status/1030471255495979009
https://twitter.com/ProfTimNoakes/status/1030375444527435776
https://twitter.com/Mangan150/status/1030487002276196352
https://twitter.com/CampbellMurdoch/status/1030488888534548481
https://twitter.com/ColinChampMD/status/1030489170924453888
https://twitter.com/FatEmperor/status/1030460135976710145
https://twitter.com/GrassBased/status/1030435088951996416
https://www.reddit.com/science/comments/980oxn/very_lowcarb_diet_could_shorten_life_expectancy/
https://www.bbc.com/news/health-45195474
https://cluelessdoctors.com/2018/08/17/when-bad-science-can-harm-you/
https://www.reddit.com/KetoNews/comments/9ft9t7/the_latest_attack_on_lowcarb_diets_science_o Nina Teicholz
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)32252-3/abstract
http://asianwithoutrice.com/low-carb-that-kills-part-1-of-2-mischief-public-manipulation/
http://asianwithoutrice.com/making-low-carb-a-murderer-part-2-of-2-broken-from-the-start/
https://isupportgary.com/articles/fakenews-headlines-low-carb-diets-arent-dangerous
ARTICLES DISAGREEING WITH SEIDELMANN PAPER:
http://www.zoeharcombe.com/2018/08/low-carb-diets-could-shorten-life-really/ DR. ZOE HARCOMBE PhD Low carb diets could shorten life (really?!) August 23, 2018
https://www.wsj.com/articles/carbs-good-for-you-fat-chance-1536705397 WALL STREET JOURNAL Carbs, Good for You? Fat Chance! Dietary dogma’s defenders continue to mislead the public and put Americans’ health at risk. By Nina Teicholz, Sept. 11, 2018 6:36 p.m. ET
https://anhinternational.org/2018/08/22/scientific-attack-on-low-carb-diets/ ANH (ALLIANCE FOR NATURAL HEALTH) INTERNATIONAL Scientific attack on low carb diets: Why the healthy low carb community shouldn't be swayed by the latest Lancet Public Health study 22 August 2018 Robert Verkerk PhD, scientific and executive director, ANH-Intl
https://anhinternational.org/2018/08/29/the-collapsing-edifice-of-nutritional-science/ ANH (ALLIANCE FOR NATURAL HEALTH) INTERNATIONAL The collapsing edifice of nutritional science: Could research be made to work in the interests of the public rather than corporations following the latest scientific attack... 29 August 2018 Robert Verkerk PhD, scientific and executive director, ANH-Intl
https://cluelessdoctors.com/2018/08/17/when-bad-science-can-harm-you/ When Bad Science Can Harm You Angela A. Stanton, PhD August 17, 2018
https://cluelessdoctors.com/2018/08/25/the-ripple-effect-of-bad-science/ The Ripple-Effect of Bad Science Angela A. Stanton, PhD August 25, 2018
https://www.linkedin.com/pulse/low-carbs-mortality-john-schoonbee LINKED IN Low carbs and mortality John Schoonbee, PhD: Global Chief Medical Officer at Swiss Re Published on August 20, 2018
https://www.docmuscles.com/will-a-low-carbohydrate-diet-kill-you/ DOC MUSCLES Will A Low-Carbohydrate Diet Kill You? Adam S. Nally, D.O. AUGUST 20, 2018
https://blog.bulletproof.com/low-carb-diet-study/ BULLETPROOF BLOG New Study Links Low-Carb Diet to Earlier Death: Here’s What It Gets Wrong By: DAVE ASPREY August 21, 2018
https://www.youtube.com/watch?v=Ce6eHcUOc4s YOUTUBE Do low-carb diets lead to early death? (The ARIC/Lancet Study Explored) Ken D Berry MD Published on Aug 19, 2018 59,732 views
https://vancouversun.com/opinion/op-ed/david-harper-keto-diet-a-healthy-alternative-to-the-standard-western-diet VANCOUVER SUN David Harper: Keto diet a healthy alternative to the standard Western diet Updated: August 23, 2018 A study published in The Lancet that concluded the ketogenic diet is associated with shorter lifespans did not consider ketogenic diets at all, but was a meta-study that incorporated decades-old research on low carb diets that did not put participants into a state of nutritional ketosis, says David Harper
https://www.psychologytoday.com/us/blog/diagnosis-diet/201809/latest-low-carb-study-all-politics-no-science Psychology Today Dr. Georgia Ede
submitted by dem0n0cracy to ketoscience [link] [comments]

Study: Low-carb diets associated with lower life expectancy

Something of interest for keto/low-carb soylent users: u/ketolent, ketosoy
A recent study from The Lancet Public Health suggests that low-carbohydrate diets are associated with decreased life expectancy. From the paper:
Methods
We studied 15 428 adults aged 45–64 years, in four US communities...The primary outcome was all-cause mortality. We investigated the association between the percentage of energy from carbohydrate intake and all-cause mortality, accounting for possible non-linear relationships in this cohort.
Findings
[T]here was a U-shaped association between the percentage of energy consumed from carbohydrate (mean 48·9%, SD 9·4) and mortality: a percentage of 50–55% energy from carbohydrate was associated with the lowest risk of mortality. In the meta-analysis of all cohorts (432 179 participants), both low carbohydrate consumption (<40%) and high carbohydrate consumption (>70%) conferred greater mortality risk than did moderate intake, which was consistent with a U-shaped association (pooled hazard ratio 1·20, 95% CI 1·09–1·32 for low carbohydrate consumption; 1·23, 1·11–1·36 for high carbohydrate consumption). However, results varied by the source of macronutrients: mortality increased when carbohydrates were exchanged for animal-derived fat or protein (1·18, 1·08–1·29) and mortality decreased when the substitutions were plant-based (0·82, 0·78–0·87).
Interpretation
Both high and low percentages of carbohydrate diets were associated with increased mortality, with minimal risk observed at 50–55% carbohydrate intake. Low carbohydrate dietary patterns favouring animal-derived protein and fat sources, from sources such as lamb, beef, pork, and chicken, were associated with higher mortality, whereas those that favoured plant-derived protein and fat intake, from sources such as vegetables, nuts, peanut butter, and whole-grain breads, were associated with lower mortality, suggesting that the source of food notably modifies the association between carbohydrate intake and mortality.
https://www.thelancet.com/journals/lanpub/article/PIIS2468-2667(18)30135-X/fulltext
submitted by Basil_love to soylent [link] [comments]

CI interval misinterpretation by researcher?

“Wow! Over 6 times greater risk of dying [Hazards Ratio? What is that?] for patients left to routine care. But the confidence interval [What is that?] is quite broad. Meaning? We have on average a 95% probability of a true hazard ratio between 1.49 and 29.35. This reflects the uncertainty in generalizing from so few deaths. It would take only a recoding of a few teens in either the adult buddy or control group to erase these findings. It might only take a lengthening or shortening of the 11 to 14 year follow up period.”
One example of a paper that discusses this misinterpretation (see p. 4, item 4 in the list).
Also from paper:
>Such an interval does not, however, directly indicate a property of the parameter; instead, it indicates a property of the procedure, as is typical for a frequentist technique. Specifically, we may find that a particular procedure, when used repeatedly across a series of hypothetical data sets (i.e., the sample space), yields intervals that contain the true parameter value in 95 % of the cases. When such a procedure is applied to a particular data set, the resulting interval is said to be a 95 % CI. The key point is that the CIs do not provide for a statement about the parameter as it relates to the particular sample at hand; instead, they provide for a statement about the performance of the procedure of drawing such intervals in repeated use. Hence, it is incorrect to interpret a CI as the probability that the true value is within the interval (eg Berger and Wolpert, 1988). As is the case with p-values, CIs do not allow one to make probability statements about parameters or hypotheses.
The correct statement is that, in repeated sampling, 95% of such CIs would contain the population parameter, even though the individual intervals do not necessarily overlap and do not have the same bounds.
In fact, it has been estimated that an unbiased 95% CI has a capture percentage of only around 83.4%, that is, the chance that a single observed interval contains the mean of a replication sample, though I'm sure this figure is inexact for a variety of reasons (Cumming and Maillardet, 2006).
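A minimal simulation, assuming normal data and a known-variance interval purely for illustration, reproduces both numbers: the roughly 95% long-run coverage of the true mean and the roughly 83% capture of a replication mean described by Cumming and Maillardet:

```python
# Simulate (a) coverage: how often a 95% CI contains the true mean, versus
# (b) capture: how often a single observed CI contains the mean of a future
# replication sample (Cumming & Maillardet, 2006). Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 0.0, 1.0, 30, 20_000
half_width = 1.96 * sigma / np.sqrt(n)          # known-variance 95% CI half-width

sample_means = rng.normal(mu, sigma / np.sqrt(n), size=reps)
replication_means = rng.normal(mu, sigma / np.sqrt(n), size=reps)

coverage = np.mean(np.abs(sample_means - mu) <= half_width)
capture = np.mean(np.abs(sample_means - replication_means) <= half_width)

print(f"coverage of true mean:       {coverage:.3f}")  # ~0.950
print(f"capture of replication mean: {capture:.3f}")   # ~0.834
```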
So I just wanted to share this here to make sure that what I’m saying is correct and that I myself am not making a mistake either. I just can’t seem to get why the mistake is being made especially by someone who has consulted on projects involving improving scientific research.
submitted by slimuser98 to AskStatistics [link] [comments]

Diabetes and Obesity Journals- Lupine Publishers


Lupine Publishers- Archives of Diabetes & Obesity (ADO)- Associated Risk Factors in Pre-diabetes and Type 2 Diabetes in Saudi Community

Abstract

Background and Objective: The prevalence and incidence of type 2 diabetes mellitus (T2DM) are increasing worldwide. Pre diabetes is a high-risk state for the development of diabetes and its associated complications. This study aims to determine the associated risk factors among T2DM and pre diabetes patients among adult Saudi population.
Methods: For the present study, we analyzed participants who are older than 20 years old and had undergone a blood test to assess HbA1c. A total of 1095 were selected to be enrolled for the present study. All patients were from the population of the Primary health and Diabetic Centres at King Fahad Armed Forces Hospital. Participants were defined as having T2DM according to self-report, clinical reports, use of anti diabetic agents and HbA1c (≥6.5). Non T2DM participants were divided into normoglycemic or pre diabetic group as follows: HbA1c < 5.7, (normoglycemic) or HbA1c 5.7-6.4 (pre diabetes). Laboratory assessments included HbA1c, lipids, creatinine and urinary micro albumin.
Main results: Of the 1095 participants analyzed, 796 were women (72.7%). Age was 45.1±11.1 years and BMI was 30.7±5.7. Hypertension had been diagnosed in 415 (38.2%) participants. Blood measurements revealed the following values: creatinine 68.2±22.0 µmol/L, urine microalbumin 55.4±200.3 µg/min, total cholesterol 4.9±1.0 mmol/L, high density lipoprotein 1.3±0.3 mmol/L, triglycerides 1.5±0.7 and low density lipoprotein 3.0±0.9 mmol/L. Of the overall 1095 analyzed participants, pre diabetes was present in 362 (33.1%), 368 (33.6%) were classified as T2DM and 365 (33.3%) as normoglycemic. When comparing pre diabetic with the normoglycemic and T2DM populations, pre diabetic subjects were more likely to have hypertension and higher triglycerides than normoglycemic but less than T2DM subjects. In addition, pre diabetic patients compared with T2DM ones had higher levels of low density lipoprotein and high density lipoprotein. Logistic regression analysis showed no significant association of any of the covariables with normoglycemic subjects relative to the pre diabetic reference group, whereas the odds of being in the diabetic group were multiplied by 7.56 for male gender (p<0.0001, OR: 7.56, 95% CI 3.16-18.23). Also, individuals with hypertension had higher odds of being in the DM group than in the pre diabetic group (p<0.0001, OR: 6.06, 95% CI 3.25-11.28). Older age was associated with lower odds of being in the DM group than in the pre diabetic group (p<0.0001, OR: 0.85, 95% CI 0.82-0.89).
Conclusion: This study found that the major clinical differences between pre diabetic and T2DM patients were the higher prevalence of hypertension and hypertriglyceridemia in the T2DM patients. Clearly, despite the small sample size, this study has posed important public health issues that require immediate attention from the health authority. Unless immediate steps are taken to contain the increasing prevalence of obesity, diabetes, and pre diabetes, the health care costs for chronic diseases will pose an enormous financial burden to the country.
Keywords: Type 2 Diabetes; Pre diabetes; Risk factors
Abbreviations: T2DM: Type 2 Diabetes Mellitus; IFG: Impaired Fasting Glucose; BMI: Body Mass Index; HTN: Hypertension; AER: Albumin Excretion Rate; DN: Diabetic Nephropathy; OR: Odds Ratio; CI: Confidence Interval; I-IFG: Isolated Impaired Fasting Glucose

Introduction

Diabetes mellitus is a major cause of excess mortality and morbidity. The prevalence and incidence of type 2 diabetes mellitus (T2DM) are increasing worldwide [1]. T2DM patients have a higher risk of developing microvascular and macrovascular disease than the general population. The occurrence of these complications depends largely on the degree of glycemic control as well as on the adequate control of cardiovascular risk factors [2-5]. In Saudi Arabia, primary epidemiological diabetes features are not different. The diabetes mellitus prevalence among adult Saudi population has reached 23.7%, a percentage being the highest across the globe [6,7]. Statistics regarding the increasing trend of diabetes and pre diabetes in the world have also been observed in Saudi Arabia. As per the WHO country profile 2016, 14.4% of Saudi population has diabetes, while prevalence in males is 14.7% [8]. In 2015, the prevalence of pre diabetics was found to be 9.0% in Jeddah with 9.4% in men, while for diabetes, it was 12.1% with 12.9% adult male population suffering from it [9]. Another study conducted in Saudi population revealed that the diabetes prevalence in their study was found to be 25.4%, while impaired fasting glucose (IFG) was 25.5%. The strongest risk factors were age > 45 years, high triglycerides levels, and hypertension [10].
Pre diabetes is a high-risk state for the development of diabetes and its associated complications [11-13].
Recent data have shown that in developed countries, such as the United States and the United Kingdom, more than one-third of adults have pre diabetes, but most of these individuals are unaware they have the condition [14-16]. Once detected, pre diabetes needs to be acknowledged with a treatment plan to prevent or slow the transition to diabetes [17,18]. Treatment of pre diabetes is associated with delay of the onset of diabetes [19]. Detection and treatment of pre diabetes is therefore a fundamental strategy in diabetes prevention [11].
Current recommendations for pre diabetes screening by the American Diabetes Association focus nearly exclusively on adults who are overweight or obese as defined by body mass index (BMI) until the patient meets the age-oriented screening at 45 years [11]. Further, the recently released recommendation from the US Preventive Services Task Force regarding screening for abnormal glucose levels and T2DM limits screening to individuals who are overweight or obese [20]. This focus on obese or overweight individuals persists even though obesity and pre diabetes have both shown trends of increasing prevalence. The US Preventive Services Task Force has recommended screening for diabetes in adults without specific symptoms and in individuals with blood pressure higher than 135/80 mmHg [21]. This study aims to determine the associated risk factors among T2DM and pre diabetes patients in the adult Saudi population.

Methods

For the present study, we analyzed participants who were older than 20 years and had undergone a blood test to assess HbA1c. A total of 1095 were selected to be enrolled in the present study. All patients were from the population of the Primary Health and Diabetic Centers at King Fahad Armed Forces Hospital. Participants were defined as having T2DM according to self-report, clinical reports, use of antidiabetic agents and HbA1c (≥6.5) [11]. Non-T2DM participants were divided into a normoglycemic or pre diabetic group as follows: HbA1c <5.7 (normoglycemic) or HbA1c 5.7-6.4 (pre diabetes) [11]. 362 subjects were found to be pre diabetic. An almost similar number of normoglycemic and T2DM subjects was selected for comparison. All data were collected by personal interview and on the basis of a review of electronic medical data. Weight (kg) and height (cm) were measured by physician and nurse interviewers and recorded. Overweight and obesity were defined as BMI 25-29.9 and ≥30.0 kg/m2 respectively [22]. Blood pressure was measured using a mercury sphygmomanometer by the palpation and auscultation method in the right arm in the sitting position. Two readings were taken 15 min apart and the average of both readings was used for analysis. Hypertension (HTN) was diagnosed if participants were taking anti-HTN medications or had a prescription of antihypertensive drugs (classified as hypertensive irrespective of their current blood pressure reading), or if the blood pressure was greater than 140/90 mmHg, i.e. systolic BP more than 140 and diastolic BP more than 90 mm Hg, per the Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines [23]. Laboratory assessments included HbA1c, lipids, creatinine and urinary microalbumin. HbA1c was expressed as a percentage and measured by high performance liquid chromatography. Fasting serum lipids were measured on a sample of blood after fasting for 14 hours. We used the enzymatic method for determining cholesterol and triglyceride levels. Diabetic nephropathy (DN) was assessed by measurement of mean albumin excretion rate (AER) on timed, overnight urine collections, using a polyclonal radioimmunoassay for albumin measurement. DN is defined as an albumin excretion rate of >20 µg/min in a timed or 24-hour urine collection, equivalent to >30 mg/g creatinine in a random spot sample.
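As a rough illustration of the classification rule described above, here is a sketch of the HbA1c cut-offs (the function name and structure are mine; the thresholds are those stated in the Methods):

```python
# Classify a participant by HbA1c (%) using the study's stated cut-offs.
# Hypothetical helper; known_t2dm covers self-report, clinical records, or
# use of antidiabetic agents, which override the HbA1c threshold.

def glycemic_group(hba1c_percent: float, known_t2dm: bool = False) -> str:
    if known_t2dm or hba1c_percent >= 6.5:
        return "T2DM"
    if hba1c_percent >= 5.7:
        return "pre diabetes"
    return "normoglycemic"

print(glycemic_group(5.4), glycemic_group(6.0), glycemic_group(7.1))
# normoglycemic pre diabetes T2DM
```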

Statistical Analysis

Univariate analysis of demographic and clinical laboratory data was accomplished using one-way analysis of variance (ANOVA) with post hoc analysis between variables, to estimate the significance of differences between groups where appropriate. Chi-square (χ2) tests were used for categorical data comparisons. The adjusted odds ratio (OR) with a 95% confidence interval (CI) was calculated. In order to evaluate the adjusted association of the aforementioned factors with being normoglycemic or diabetic in relation to the pre diabetes group, a multinomial logistic regression model was fit, in which the categorical dependent variable was normoglycemia, pre diabetes or T2DM (with pre diabetes as the reference category), and significant variables in bivariate analyses were included as explanatory variables. Despite the ordinal nature of the dependent variable, ordered logistic regression was not used because the aim of the study was not the association of factors with a latent degree of diabetes but the differential profile of pre diabetes relative to normoglycemia and diabetes. As all the participants were the same age, adjusting for age was not applied. All statistical analyses were performed using SPSS Version 22.0. The difference between groups was considered significant when P<0.05.
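A minimal sketch of the kind of multinomial logistic model described above, with pre diabetes as the reference category, using statsmodels. The synthetic data and column names are mine; the actual study dataset is not available here:

```python
# Multinomial logistic regression with pre diabetes as the reference category.
# Synthetic, illustrative data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1095
df = pd.DataFrame({
    "group": pd.Categorical(
        rng.choice(["pre diabetes", "normoglycemic", "T2DM"], size=n),
        categories=["pre diabetes", "normoglycemic", "T2DM"],  # first = reference
    ),
    "male": rng.integers(0, 2, size=n),
    "age": rng.normal(45, 11, size=n),
    "htn": rng.integers(0, 2, size=n),
})

X = sm.add_constant(df[["male", "age", "htn"]])
fit = sm.MNLogit(df["group"].cat.codes, X).fit(disp=0)

# Exponentiated coefficients are odds ratios versus the pre diabetes reference;
# e.g. exp(coef for "male") in the T2DM equation plays the role of the OR ~7.56
# reported in the Results (here it will be near 1 because the data are random).
print(np.exp(fit.params))
```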

Results

Of the 1095 participants analyzed, 796 were women (72.7%). Age was 45.1±11.1 years and BMI was 30.7±5.7. Hypertension had been diagnosed in 415 (38.2%) participants. Blood measurements revealed the following values: creatinine 68.2±22.0 µmol/L, urine microalbumin 55.4±200.3 µg/min, total cholesterol 4.9±1.0 mmol/L, high density lipoprotein 1.3±0.3 mmol/L, triglycerides 1.5±0.7 and low density lipoprotein 3.0±0.9 mmol/L. Of the overall 1095 analyzed participants, pre diabetes was present in 362 (33.1%), 368 (33.6%) were classified as T2DM and 365 (33.3%) as normoglycemic. Table 1 shows the clinical characteristics and laboratory data of the three groups according to the predefined glycemic status. When comparing pre diabetic with the normoglycemic and T2DM populations, pre diabetic subjects were more likely to have hypertension and higher triglycerides than normoglycemic but less than T2DM subjects. In addition, prediabetic patients compared with T2DM ones had higher levels of low density lipoprotein and high density lipoprotein. In Table 2, logistic regression analysis showed no significant association of any of the covariables with normoglycemic subjects relative to the pre diabetic reference group, whereas the odds of being in the diabetic group were multiplied by 7.56 for male gender (p<0.0001, OR: 7.56, 95% CI 3.16-18.23). Also, individuals with hypertension had higher odds of being in the DM group than in the pre diabetic group (p<0.0001, OR: 6.06, 95% CI 3.25-11.28). Older age was associated with lower odds of being in the DM group than in the pre diabetic group (p<0.0001, OR: 0.85, 95% CI 0.82-0.89).

Discussion

This study showed that multiple risk factors are related to T2DM, but not to the pre diabetes group, including age, female gender and HTN. Generalization to the whole population may not be possible because of regionalized characteristics. In addition, the study does not evaluate the healthcare services offered in our city. The size of our sample and the cross-sectional design of the study should also be taken into consideration.
T2DM is a major health concern worldwide and is increasing in parallel with the obesity epidemic [24]. Prevalence of T2DM has increased dramatically, with 1 million people reported to have been diagnosed with T2DM in 1994, increasing to 382 million by 2013, and with a prediction of 592 million by 2035 [25]. Given that both genetic and environmental factors contribute to T2DM progression, it has been proposed that, amid increasing globalization, Asian populations including Saudi Arabia have been unable to adapt to food- and lifestyle-related aspects of westernized culture [26]. Hence, when matched for the same gender, age, and body weight, those with Asian ethnicity appear to have a greater risk of poor metabolic health than Caucasian counterparts, including Europeans [27]. This increased risk for T2DM has been reported in both Asian populations and Saudi Arabia [6-10,28].
Currently, the population with pre-diabetes has reached approximately 318 million around the world, accounting for 6.7% of the total number of adults. About 69.2% of the prediabetes population lives in low or middle-income countries [29]. Understanding pre diabetes may be crucial to reducing the global T2DM epidemic and is defined either by the presence of isolated impaired fasting glucose (I-IFG); or isolated impaired glucose tolerance (I-IGT); or both IFG and IGT. To maintain glucose homeostasis greater secretion of insulin is required from the pancreatic cells, and hence hyperinsulinemia develops. Prolonged hyperinsulinemia and/or fatty pancreas may in turn lead to the dysfunction of pancreatic cells, resulting in impaired insulin secretion [30]. Decreased insulin secretion and concomitant increased blood glucose levels consequently also lead to the reduced uptake of glucose by skeletal muscle, thereby enhancing muscle insulin resistance [31]. IFG, determined from fasting plasma glucose, occurs as a result of poor glucose regulation, resulting in raised blood glucose even after an overnight fast, while IGT is due to an individual being unable to respond to glucose consumed as part of a meal, resulting in increased postprandial blood glucose [11]. More recently, prediabetes has also been identified by mildly elevated HbA1c [32,33].
The younger age of T2DM in our cohort is consistent with that seen among other groups such as Australians and American Indian and Alaska natives [34-36]. Age of subjects had lower odds of being in the DM group than in the pre diabetic group (p<0.0001, OR: 0.85, 95% CI 0.82-0.89), in concordance with earlier reports [37,38]. The odds of being in the diabetic group were multiplied by 7.56 for male gender (p<0.0001, OR: 7.56, 95% CI 3.16-18.23). As seen in this study, the majority of the female participants were either overweight (59.6%) or obese (78.6%). The reason for such an observation has not been completely elucidated but is proposed to be associated with obesity, which is highly prevalent in populations worldwide. Since obesity is closely linked to increased insulin resistance, decreased insulin sensitivity and a higher risk of diabetes, arresting the obesity pandemic among our population should be a priority [39-41]. Special, culturally oriented community-based intervention programs need to be implemented. The frequency of pre diabetes among female cases, 27.2% of the total cohort in this study, was six times higher than an earlier estimate of 4.2% in 2006 [42,43]. Due to our small sample size, this is inconclusive and needs to be verified by extending our study to more of our communities. Nevertheless, our findings warrant special attention from the health authorities since, although HbA1c is not as sensitive as the IGT test, it has consistently been shown to be a good predictor of increased risk for cardiovascular diseases and T2DM in many populations around the world [44,45].
Previous cross-sectional studies have reported that multiple risk factors are related to pre-diabetes, such as increased age, overweight, obesity, blood pressure, and dyslipidemia [37,46,47]. More importantly, impaired glucose tolerance was found to be an independent risk factor for cardiovascular disease, the hazard ratio of death was 2.22 (95% CI = 1.08–4.58), and arterial stiffness and pathological changes in the arterial intima occurred in the stage of IGT [48]. The participants in our study with pre-diabetes had higher BMI, more frequent HTN, higher triglycerides, and more frequent renal failure and DN than those without pre-diabetes, but lower than participants with T2DM. Logistic regression analysis showed no significant association of any of the covariables with normoglycemic subjects relative to the pre diabetic reference group, whereas the odds of being in the diabetic group were multiplied by 7.56 for male gender. Also, individuals with hypertension had higher odds of being in the DM group than in the pre diabetic group. Age of subjects had lower odds of being in the DM group than in the pre diabetic group, which was consistent with earlier studies [37,38].
Previous studies have reported that overweight and obesity were the main factors contributing to insulin resistance, and insulin resistance was the basis of diabetes and other chronic diseases [49,50]. In the present study, BMI was significantly higher in the pre diabetes group than the normal group (p=0.03). When BMI was classified into three categories, the total numbers of overweight and obese people in the pre-diabetes and normal groups were 293 and 291, respectively (the total numbers were 362 and 365, respectively), and there were statistically non-significant differences in being overweight or obese between the pre-diabetes and normal groups (OR = 1.02, 95% CI = 0.86–1.21, p=0.8). Increasing evidence suggests that the excess body fat in overweight/obese people might lead to increased degradation of fat, which results in the production of large amounts of free fatty acids (FFAs). When the level of FFAs is higher in blood, the capacity of liver tissue for insulin-mediated glucose uptake and utilization is lower, so the blood glucose level in the circulation is high [51]. In other words, high FFAs in the blood are one of the important pathogenic factors linking obesity to insulin resistance [52]. A likely reason that BMI category was not a significant factor in our study is that the cohort's mean BMI was already in the obesity range (p=0.3). However, the mean BMI was significantly different between the studied groups (p=0.03).
A high level of triglycerides was not significantly associated as a risk factor for developing pre-diabetes or T2DM (OR = 1.09, 95% CI = 0.60-2.00, p=0.8 and OR = 1.44, 95% CI = 0.86-2.40, p=0.2, respectively). A high level of triglycerides could increase fat deposition in muscle, liver, and pancreas, and it could damage the function of mitochondria and induce oxidative stress which, in turn, could cause insulin resistance, but also lead to impaired islet B cell function [53]. Some studies suggested an interrelation between hypertriglyceridemia and insulin resistance and that they promote each other's development [54,55]. In concordance with our result, in some epidemiological studies, for instance the Framingham Heart Study, hypertriglyceridemia was more prevalent in type 2 diabetes mellitus patients than in the normal population, suggesting that hypertriglyceridemia is a causal factor of type 2 diabetes mellitus [56]. However, this paper was a cross-sectional study, thus it was impossible to determine the causal relationship between hypertriglyceridemia and pre-diabetes and T2DM.
Hypertension was found to be a risk factor for T2DM but not for the pre diabetes group in our study (OR = 6.06, 95% CI = 3.25-11.28, p<0.0001 and OR = 0.95, 95% CI = 0.50-1.82, p=0.9, respectively). A possible mechanism is that the activity of angiotensin II is increased in the circulatory system of patients with hypertension. Angiotensin II activates the renin-angiotensin-aldosterone system and affects the function of the pancreatic islets, resulting in islet fibrosis and reduced synthesis of insulin, and ultimately leading to insulin resistance [57,58]. Insulin resistance can also aggravate the condition of hypertension. Directly or indirectly through the activity of the renin-angiotensin-aldosterone system, insulin promotes renal tubular reabsorption of Na+ and water, leading to increased blood volume and cardiac output; this is considered one of the reasons for the development of hypertension [59]. Interactions between abnormal glucose tolerance, hypertension, and dyslipidemia could impair endothelial cells and result in atherosclerosis or other cardiovascular complications. Therefore, the management of the daily diet of people with pre-diabetes and the monitoring of body weight, blood lipids, and blood pressure are very important.
Results of our investigation must be interpreted in light of some limitations, such as the cross-sectional design, which does not allow us to establish any causal relation with respect to the prediabetic state and only provides associations. Moreover, the classification of glycemic state was based on HbA1c alone, instead of its combination with a glucose tolerance test. It is therefore expected that the lack of glucose tolerance test data leads to a suboptimal estimation of glycemic state, because the normoglycemic group may include some individuals with impaired glucose tolerance who should have been included in the pre diabetic group. Considering the goal population, a larger cohort would probably have provided greater power for the statistical analyses.

Conclusion

This study found that the major clinical differences between pre diabetic and T2DM patients were the higher prevalence of hypertension and hypertriglyceridemia in the T2DM patients. Clearly, despite the small sample size, this study has posed important public health issues that require immediate attention from the health authority. Unless immediate steps are taken to contain the increasing prevalence of obesity, diabetes, and pre diabetes, the health care costs for chronic diseases will pose an enormous financial burden to the country.

Conclusion

Use a plant-based protein blend diet: pea (lowers levels of the hunger hormone ghrelin), quinoa (chock full of anti-inflammatory compounds called flavonoids), hemp (contains 20 amino acids, including 9 the body cannot make on its own, plus healthy omega fats and fiber), coconut (packed with healthy saturated fats that go straight to the liver for a quick energy boost), monk fruit (contains powerful antioxidants called mogrosides), cinnamon (clinically proven to support healthy blood sugar levels and healthy triglyceride levels), and vanilla bean (loaded with minerals like magnesium, potassium, and calcium; vanilla also has mood-boosting and energy-enhancing effects on the body). Zero alcohol use.

Acknowledgment

We are grateful to the staff of the diabetic centre at King Fahad Armed Forces Hospital for their valuable contributions to data collection. The authors have no conflict of interest to disclose.
For more Lupine Publishers Open Access Journals Please visit our website: https://lupinepublishersgroup.com/
For more articles open access Diabetes and Obesity Journals Please Click Here:
https://lupinepublishers.com/diabetes-obesity-journal/
To Know More About Open Access Publishers Please Click on Lupine Publishers
submitted by LupinePublishers to u/LupinePublishers [link] [comments]

Associated Risk Factors in Pre-diabetes and Type 2 Diabetes in Saudi Community

Associated Risk Factors in Pre-diabetes and Type 2 Diabetes in Saudi Community
Lupine Publishers| Journal of Diabetes and Obesity


Abstract

Background and Objective: The prevalence and incidence of type 2 diabetes mellitus (T2DM) are increasing worldwide. Pre diabetes is a high-risk state for the development of diabetes and its associated complications. This study aims to determine the associated risk factors among T2DM and pre diabetes patients among adult Saudi population.
Methods: For the present study, we analyzed participants who are older than 20 years old and had undergone a blood test to assess HbA1c. A total of 1095 were selected to be enrolled for the present study. All patients were from the population of the Primary health and Diabetic Centres at King Fahad Armed Forces Hospital. Participants were defined as having T2DM according to self-report, clinical reports, use of anti diabetic agents and HbA1c (≥6.5). Non T2DM participants were divided into normoglycemic or pre diabetic group as follows: HbA1c < 5.7, (normoglycemic) or HbA1c 5.7-6.4 (pre diabetes). Laboratory assessments included HbA1c, lipids, creatinine and urinary micro albumin.
Main results: Of the 1095 participants analyzed, 796 were women (72.7%). Age was 45.1±11.1 and BMI was 30.7±5.7. Hypertension had been diagnosed in 415 (38.2%) participants. Blood measurements revealed the following values: creatinine 68.2±22.0umol/L , Urine micro albumin (g/min) 55.4±200.3, total cholesterol levels 4.9±1.0mmol/L, high density lipoprotein 1.3±0.3mmol/L, triglyceride levels 1.5±0.7 and low density lipoprotein 3.0±0.9mmol/L. Of the overall 1095 analyzed participants, pre diabetes was present in 362(33.1%), 368(33.6%) were classified as T2DM and 365 (33.3%) as normoglycemic. When comparing pre diabetic with normoglycemic and T2DM population, pre diabetic subjects were more likely to have hypertension and higher triglyceride than normoglycemic but less than T2DM subjects. In addition, pre diabetic patients compared with T2DM ones had higher levels of low density lipoprotein and high density lipoprotein. Logistic regression analysis showed no significant association of any of the co variables with normoglycemic subjects in front of the pre diabetic reference group, whereas the odds of being in the diabetic group gets multiplied by 7.56 for each unitary increase in male gender (p< 0.0001, OR: 7.56, 95% CI 3.16-18.23). Also, individuals with hypertension had higher odds of being in the DM group than in the prediabetic (p<0 .0001, OR: 6.06, 95% CI 3.25- 11.28). Age of subjects had lower odds of being in the DM group than in the pre diabetic (p<0 .0001, OR: 0.85, 95% CI (0.82-0.89).
Conclusion: This study found the major clinical differences between pre diabetic and T2DM Patients were the higher hypertension and hypertriglyceridenia in the T2DM patients. Clearly, despite the small sample size, this study has posed important public health issues that require immediate attention from the health authority. Unless immediate steps are taken to contain the increasing prevalence of obesity, diabetes, pre diabetes, the health care costs for chronic diseases will pose an enormous financial burden to the country
Keywords: Type 2 Diabetes; Pre diabetes; Risk factors
Abbreviations: T2DM: Type 2 Diabetes Mellitus; IFG: Impaired Fasting Glucose; BMI: Body Mass Index; HTN: Hypertension; AER: Albumin Excretion Rate; DN: Diabetic Nephropathy; OR: Odds Ratio; CI: Confidence Interval; I-IFG: Isolated Impaired Fasting Glucose

Introduction

Diabetes mellitus is a major cause of excess mortality and morbidity. The prevalence and incidence of type 2 diabetes mellitus (T2DM) are increasing worldwide [1]. T2DM patients have a higher risk of developing microvascular and macrovascular disease than the general population. The occurrence of these complications depends largely on the degree of glycemic control as well as on the adequate control of cardiovascular risk factors [2-5]. In Saudi Arabia, primary epidemiological diabetes features are not different. The diabetes mellitus prevalence among adult Saudi population has reached 23.7%, a percentage being the highest across the globe [6,7]. Statistics regarding the increasing trend of diabetes and pre diabetes in the world have also been observed in Saudi Arabia. As per the WHO country profile 2016, 14.4% of Saudi population has diabetes, while prevalence in males is 14.7% [8]. In 2015, the prevalence of pre diabetics was found to be 9.0% in Jeddah with 9.4% in men, while for diabetes, it was 12.1% with 12.9% adult male population suffering from it [9]. Another study conducted in Saudi population revealed that the diabetes prevalence in their study was found to be 25.4%, while impaired fasting glucose (IFG) was 25.5%. The strongest risk factors were age > 45 years, high triglycerides levels, and hypertension [10].
Pre diabetes is a high-risk state for the development of diabetes and its associated complications [11-13].
Recent data have shown that in developed countries, such as the Unites States and the United Kingdom, more than one-third of adults have pre diabetes, but most of these individuals are unaware they have the condition [14-16]. Once detected, pre diabetes needs to be acknowledged with a treatment plan to prevent or slow the transition to diabetic [17,18]. Treatment of pre diabetes is associated with delay of the onset of diabetes [19]. Detection and treatment of pre diabetes is therefore a fundamental strategy in diabetes prevention [11].
Current recommendations for pre diabetes screening by the American Diabetes Association focus nearly exclusively on adults who are overweight or obese as defined by body mass index (BMI) until the patient meets the age-oriented screening at 45 years [11]. Further, the recently released recommendation from the US Preventive Services Task Force regarding screening for abnormal glucose levels and T2DM limits screening to individuals who are overweight or obese [20]. This focus on obese or overweight individuals, although obesity and pre diabetes have shown trends of increasing prevalence. United States Preventive Services Task Force has recommended screening of diabetes in adults devoid of precise symptoms and in individuals with BP higher than 135/80mmHg [21]. This study aims to determine the associated risk factors among T2DM and pre diabetes patients among adult Saudi population.

Methods

For the present study, we analyzed participants who are older than 20 years old and had undergone a blood test to assess HbA1c. A total of 1095 were selected to be enrolled for the present study. All patients were from the population of the Primary health and Diabetic Centers at King Fahad Armed Forces Hospital. Participants were defined as having T2DM according to self-report, clinical reports, use of anti diabetic agents and HbA1c (≥6.5) [11]. Non T2DM participants were divided into normoglycemic or pre diabetic group as follows: HbA1c<5.7, (normoglycemic) or HbA1c 5.7-6.4 (pre diabetes) \[11\]. 362 subjects were found to be pre diabetic. Almost similar number of normoglyceic and T2DM subjects was selected to be analyzed for comparison. All data were collected by personal interview and on the basis of a review of electronic medical data. Weight (kg) and height (cm) were measured by physician and nurse interviewers and recorded. Overweight and obesity were defined as BMI 25-29.9 and ≥30.0kg/m2 respectively \[22\]. Blood Pressure readings were within a gap of 15 minutes using a mercury sphygmomanometer by palpation and auscultation method in right arm in sitting position. Two readings were taken 15 min apart and the average of both the readings was taken for analysis. Hypertension (HTN) was also diagnosed based on anti HTN medications or having a prescription of antihypertensive drugs and were classified as Hypertensive irrespective of their current blood pressure reading or if the blood pressure was greater than 140/90 mmHg i.e. systolic BP more than 140 and diastolic BP more than 90 mm of Hg – Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines \[23\]. Laboratory assessments included HbA1c, lipids, creatinine and urinary micro albumin. HbA1c was expressed as percentage. High performance liquid chromatography was used. Fasting serum lipids were measured on a sample of blood after fasting for 14 hours. We used the enzymatic method for determining the cholesterol and trigylcerides levels. Diabetic nephropathy (DN) was assessed by measurement of mean albumin excretion rate (AER) on timed, overnight urine collections. We use a polyclonal radioimmunoassay for albumin measurement. DN is defined as an albumin excretion rate of >20g/min in a timed or a 24hr urine collection which is an equivalent to >30 mg/g creatinine in a random spot sample.

Statistical Analysis

Univariate analysis of demographic and clinical laboratory was accomplished using one-way analysis of variance (ANOVA) with posy hoc analysis between variables, to estimate the significance of different between groups where appropriate. Chi square (X2) test were used for categorical data comparison. The adjusted odds ratio (OR) with a 95% confidence interval (CI) was calculated. In order to evaluate the adjusted association of aforementioned factors on being normoglycemic or diabetic in relation to the pre diabetes group, a multinomial logistic regression model was fit, in which the categorical dependent variable was normoglycemia, pre diabetes or T2DM(with pre diabetes as the reference category), and significant variables in bivariate analyses were included as explanatory variables. Despite of the ordinal nature of the dependent variable, ordered logistic regression was not adjusted because the aim of the study was not the association of factors with a latent degree of diabetes but the differential profile of pre diabetes in front of normoglicemia and diabetes. As all the participants were the same age, adjusting for age was not applied. All statistical analyses were performed using SPSS Version 22.0. The difference between groups was considered significant when P<0.05.

Results

Of the 1095 participants analyzed, 796 were women (72.7%). Age was 45.1±11.1 and BMI was 30.7±5.7. Hypertension had been diagnosed in 415 (38.2%) participants. Blood measurements revealed the following values: creatinine 68.2±22.0umol/L, Urine microalbumin (g/min) 55.4±200.3, total cholesterol levels 4.9±1.0mmol/L, high density lipoprotein 1.3±0.3mmol/L, triglyceride levels 1.5±0.7 and low density lipoprotein 3.0 ±0.9mmol/L. Of the overall 1095 analyzed participants, pre diabetes was present in 362(33.1%), 368(33.6%) were classified as T2DM and 365 (33.3%) as normoglycemic. Table 1 shows the clinical characteristics and laboratory data of the three groups according to the predefined glycemic status. When comparing pre diabetic with normoglycemic and T2DM population, pre diabetic subjects were more likely to have hypertension and higher triglyceride than normoglycemic but less than T2DM subjects. In addition, prediabetic patients compared with T2DM ones had higher levels of low density lipoprotein and high density lipoprotein. In Table 2, logistic regression analysis showed no significant association of any of the covariables with normoglycemic subjects in front of the pre diabetic reference group, whereas the odds of being in the diabetic group gets multiplied by 7.56 for each unitary increase in male gender (p<0.0001, OR: 7.56, 95% CI 3.16-18.23). Also, individuals with hypertension had higher odds of being in the DM group than in the pre diabetic (p<0 .0001, OR: 6.06, 95% CI 3.25-11.28). Age of subjects had lower odds of being in the DM group than in the pre diabetic (p<0 .0001, OR: 0.85, 95% CI (0.82-0.89).
Discussion
This study showed that multiple risk factors are related to T2DM, but not to the pre diabetes group, including age, female gender and HTN. Generalization to all population could not be due to regionalized characteristics. In addition, it does not evaluate the healthcare services offered in our city. The size of our sample and the cross section type of the study should be of consideration.
T2DM is a major health concern worldwide and is increasing in parallel with the obesity epidemic [24]. Prevalence of T2DM has increased dramatically with 1 million people reported to have been diagnosed with T2DM in 1994, increasing to 382 million by 2013, and with prediction of 592 million by 2035 [25]. Given that both genetic and environmental factors contribute to T2DM progression, it has been proposed that amongst increasing globalization, Asian ethnicities including Saudi Arabia have been unable to adapt to food and lifestyle related aspects of westernized culture [26]. Hence when matched for the same gender, age, and body weight, those with Asian ethnicity appear to have a greater risk of poor metabolic health than Caucasian counterparts including Europeans people [27]. This increased risk for T2DM has been reported in both Asians and Saudi Arabia [6-10,28].
Currently, the population with pre-diabetes has reached approximately 318 million around the world, accounting for 6.7% of the total number of adults. About 69.2% of the prediabetes population lives in low or middle-income countries [29]. Understanding pre diabetes may be crucial to reducing the global T2DM epidemic and is defined either by the presence of isolated impaired fasting glucose (I-IFG); or isolated impaired glucose tolerance (I-IGT); or both IFG and IGT. To maintain glucose homeostasis greater secretion of insulin is required from the pancreatic cells, and hence hyperinsulinemia develops. Prolonged hyperinsulinemia and/or fatty pancreas may in turn lead to the dysfunction of pancreatic cells, resulting in impaired insulin secretion [30]. Decreased insulin secretion and concomitant increased blood glucose levels consequently also lead to the reduced uptake of glucose by skeletal muscle, thereby enhancing muscle insulin resistance [31]. IFG, determined from fasting plasma glucose, occurs as a result of poor glucose regulation, resulting in raised blood glucose even after an overnight fast, while IGT is due to an individual being unable to respond to glucose consumed as part of a meal, resulting in increased postprandial blood glucose [11]. More recently, prediabetes has also been identified by mildly elevated HbA1c [32,33].
The younger age of T2DM in our cohort is consistent with that seen among other groups such as the Australians, the American Indian and Alaska natives [34-36]. Age of subjects had lower odds of being in the DM group than in the pre diabetic (p<0 .0001, OR: 0.85, 95% CI (0.82-0.89) in concordance with earlier reports [37,38]. Odds of being in the diabetic group gets multiplied by 7.56 for each unitary increase in male gender (p< 0.0001, OR: 7.56, 95% CI 3.16- 18.23). As seen in this study, majority of the female participants were either overweight (59.6%) or obese (78.6%). The reason for such an observation has not been completely elucidated but is proposed to be associated with obesity which is highly prevalent in the populations worldwide. Since obesity is closely linked to increased insulin resistance and decreased insulin sensitivity and higher risk of diabetes, arresting the obesity pandemic among our population should be a priority [39-41]. Special, culturally oriented community-based intervention programs need to be implemented. The frequency of pre diabetes in 27.2% of the female cases out of the total cohort in this study was six times higher than other, estimated to be 4.2% in 2006 [42,43]. Due to our small sample size, this is inconclusive and needs to be verified by extending our study to more of our communities. Nevertheless, our findings warrant special attention from the health authorities since although HbA1c is not as sensitive as IGT test, it has consistently been shown to be a good predictor of increased risk for cardiovascular diseases and T2DM in many populations around the world [44,45].
Previous cross-sectional studies have reported that multiple risk factors are related to pre-diabetes, Such as increased age, overweight, obesity, blood pressure, and dyslipidemia [37,46,47]. More importantly, impaired glucose tolerance was found to be an independent risk factor for cardiovascular disease, the hazard ratio of death was 2.22 (95% CI = 1.08–4.58), and arterial stiffness and pathological changes in the arterial intima occurred in the stage of IGT [48]. The participants in our study with pre-diabetes had higher BMI, more frequent HTN, higher triglyceride, frequent renal failure and DN than those without pre-diabetes but lower than participants with T2DM. logistic regression analysis showed no significant association of any of the covariables with normoglycemic subjects in front of the pre diabetic reference group, whereas the odds of being in the diabetic group gets multiplied by 7.56 for each unitary increase in male gender. Also, individuals with hypertension had higher odds of being in the DM group than in the pre diabetic. Age of subjects had lower odds of being in the DM group than in the pre diabetic which was consistent with earlier studies [37,38].
Previous studies have reported that overweight and obesity were the mainly factors contributing to insulin resistance, and insulin resistance was the basis of diabetes and other chronic diseases [49,50]. In the present study, BMI was significantly higher in the pre diabetes than the normal groups, p=0.03. When BMI was classified into three types. The total numbers of overweight and obese people in the pre-diabetes and normal groups were 293 and 291, respectively (the total number were 362 and 365, respectively), and there were statistically non significant differences in being overweight or obese between the pre-diabetes and normal groups (OR = 1.02, 95% CI = 0.86–1.21, p=0.8). Increasing evidence suggests that the excess body fat in overweight/obese people might lead to increased degradation of fat, which resulted in the production of large amounts of free fatty acids (FFAs). When the level of FFAs was higher in blood, the capacity of liver tissue for insulin-mediated glucose uptake and utilization was lower, so the blood glucose level was high in circulation [51]. In other words, high FFAs in the blood were one of the important pathogenic factors of obesity caused by insulin resistance [52]. The fact that BMI categories was not a significant factor in our study is the cohort mean BMI was in the obesity range, p=0.3. However, the mean BMI was significantly different between the studied groups, p=0.03.
A high level of triglycerides was not significantly associated as a risk factor for developing pre-diabetes and T2DM (OR = 1.09, 95% CI = (0.60-2.00), P=0.8, 1.44(0.86-2.40),P=0.2) respectively. High level of triglycerides could increase the fat deposition in muscle, liver, and pancreas, and it could damage the function of mitochondria and induce oxidative stress which, in turn, could cause insulin resistance, but also lead to impaired islet B cell function [53]. Some studies suggested an interrelation between hyper triglyceridemia and insulin resistance and that they promote each other’s development [54,55]. In concordance with our result, in some epidemiological studies, for instance, the Framingham Heart Study, hyper triglyceridemia was more prevalent in type 2 diabetes mellitus patients than in the normal population, suggesting that hyper triglyceridemia is a causal factor of type 2 diabetes mellitus [56]. However, this paper was a cross-sectional study, thus it was impossible to determine the causal relationship between hyper triglyceridemia and pre-diabetes and T2DM.
Hypertension was found to be a risk factor for T2DM but not for the pre diabetes group in our study (OR = 6.06, 95% CI =3.25- 11.28, p<0.0001, OR = 0.95, 95% CI = 0.50-1.82, p=0.9) respectively. A possible mechanism is that the activity of angiotensin II is increased in the circulatory system of patient with hypertension. Angiotensin II activates renin-angiotensin-aldosterone system and affects the function of the pancreatic islets, resulting in islet fibrosis and reduced synthesis of insulin, and ultimately leading to insulin resistance [57,58]. Insulin resistance can also aggravate the condition of hypertension. Directly or indirectly through the activity of renin-angiotensin-aldosterone system, insulin promotes renal tubular to reabsorb Na+ and water, leading to the increased blood volume and cardiac output; this is considered as one of reasons for the development of hypertension [59]. Interactions between abnormal glucose tolerance, hypertension, and dyslipidemia could impair endothelial cell and result in atherosclerosis or other cardiovascular complications. Therefore, the management of daily diet of people with pre-diabetes and the monitoring of body weight, blood lipids, and blood pressure is very important.
Results of our investigation must be interpreted in light of some limitations, such as the cross-sectional design, which does not allow any causal relation with respect to the prediabetic state to be established and provides only associations. Moreover, the classification of glycemic state was based on HbA1c alone rather than in combination with a glucose tolerance test. The lack of glucose tolerance test data is therefore expected to lead to a suboptimal estimation of glycemic state, because the normoglycemic group may include some individuals with impaired glucose tolerance who should have been classified as pre-diabetic. Considering the target population, a larger cohort would probably have given the statistical analyses greater power.

Conclusion

This study found that the major clinical differences between pre-diabetic and T2DM patients were the higher prevalence of hypertension and hypertriglyceridemia among the T2DM patients. Despite the small sample size, this study has clearly posed important public health issues that require immediate attention from the health authority. Unless immediate steps are taken to contain the increasing prevalence of obesity, diabetes, and pre-diabetes, the health care costs of chronic diseases will pose an enormous financial burden on the country.

Conclusion

Use a plant-based protein blend diet. Pea protein lowers levels of the hunger hormone ghrelin. Quinoa is chock full of anti-inflammatory compounds called flavonoids. Hemp contains 20 amino acids (including 9 the body cannot make on its own), healthy omega fats, and fiber. Coconut is packed with healthy saturated fats that go straight to the liver for a quick energy boost. Monk fruit contains powerful antioxidants called mogrosides. Cinnamon is clinically proven to support healthy blood sugar levels and healthy triglyceride levels. Vanilla bean is loaded with minerals like magnesium, potassium, and calcium, and also has mood-boosting and energy-enhancing effects on the body. Zero alcohol use.
submitted by LupinePublishers to u/LupinePublishers [link] [comments]

Apple's EU Tax Kerfuffle - Ben Thompson

The current story in /Apple about this tax issue has lots of misinformation, which is understandable both because international corporate tax is complicated and because we're on Reddit, where the tendency of Redditors on any of these stories is completely predictable and obviously skews against big corporations (and even here in /Apple, which I commented about yesterday, Apple isn't going to get the benefit of the doubt).
Anyway, I didn't want to waste my time writing up my own thoughts beyond my brief comment given this audience, but I realize that's unfair to the adults who have an open mind about this. So here's Ben Thompson's daily email about this issue; he's a better messenger anyway (the Apple haters would dismiss my argument out of hand as coming from an Apple fanboy, but they can't do that in this case).
~~
Apple's EU Tax Problem
Apple owes Ireland a lot of money. Maybe. From the European Commission press release:
Following an in-depth state aid investigation launched in June 2014, the European Commission has concluded that two tax rulings issued by Ireland to Apple have substantially and artificially lowered the tax paid by Apple in Ireland since 1991. The rulings endorsed a way to establish the taxable profits for two Irish incorporated companies of the Apple group (Apple Sales International and Apple Operations Europe), which did not correspond to economic reality: almost all sales profits recorded by the two companies were internally attributed to a "head office". The Commission's assessment showed that these "head offices" existed only on paper and could not have generated such profits. These profits allocated to the "head offices" were not subject to tax in any country under specific provisions of the Irish tax law, which are no longer in force. As a result of the allocation method endorsed in the tax rulings, Apple only paid an effective corporate tax rate that declined from 1% in 2003 to 0.005% in 2014 on the profits of Apple Sales International.
This selective tax treatment of Apple in Ireland is illegal under EU state aid rules, because it gives Apple a significant advantage over other businesses that are subject to the same national taxation rules. The Commission can order recovery of illegal state aid for a ten-year period preceding the Commission's first request for information in 2013. Ireland must now recover the unpaid taxes in Ireland from Apple for the years 2003 to 2014 of up to €13 billion, plus interest.
In fact, the tax treatment in Ireland enabled Apple to avoid taxation on almost all profits generated by sales of Apple products in the entire EU Single Market. This is due to Apple's decision to record all sales in Ireland rather than in the countries where the products were sold. This structure is however outside the remit of EU state aid control. If other countries were to require Apple to pay more tax on profits of the two companies over the same period under their national taxation rules, this would reduce the amount to be recovered by Ireland.
As Tim Cook noted in a blistering letter posted on Apple's website, in 1980 Apple set up a factory in Cork, Ireland, and the company has, in Cook's words, "Operated continuously in Cork ever since." It wasn't all smooth sailing though. Specifically, by 1990 Apple had expanded its Cork operations to cover most of its European business in addition to its Cork factory, and the company asked for a meeting with the government to discuss its tax situation; the Financial Times has the notes:
[The tax adviser’s employee representing Apple] mentioned by way of background information that Apple was now the largest employer in the Cork area with 1,000 direct employees and 500 persons engaged on a subcontract basis. It was stated that the company is at present reviewing its worldwide operations and wishes to establish a profit margin on its Irish operations. [The tax adviser’s employee representing Apple] produced the accounts prepared for the Irish branch for the accounting period ended […] 1989 which showed a net profit of $270m on a turnover of $751m. It was submitted that no quoted Irish company produced a similar net profit ratio. In [the tax adviser’s employee representing Apple]’s view the profit is derived from three sources – technology, marketing and manufacturing. Only the manufacturing element relates to the Irish branch.
It's hard to view that "background information" as anything other than an implied threat that Apple was willing to leave Cork and those 1,500 employees unless it got a better tax deal, and the Irish government was happy to come to an agreement: it would only collect taxes on the parts of Apple's business related to manufacturing in Cork. Over the following decades Apple would come to use the non-manufacturing (and thus non-taxed) portion of its Cork subsidiary as a conduit for all of its non-U.S. revenue; that revenue stream has obviously grown humongously, while Cork's manufacturing revenue has not (although, and probably not coincidentally, it remains the only factory Apple owns), resulting in the European Commission's complaint that Apple only paid a tax rate of 0.005% on revenue from its Ireland-based Apple Sales International subsidiary.
That 0.005% number immediately raises red flags, but not the populist ones the European Commission is trying to wave: rather, there is no way that Apple could report a corporate tax rate of 26.4% in 2015 while simultaneously paying only 0.005% tax on its worldwide revenue. And, in fact, they are not: Apple Sales International revenue is being taxed, it's just not being collected, at least not yet. Understanding that makes the European Commission's actions a lot more difficult to understand.
How Apple Pays Taxes…Eventually
As I am painfully aware, the United States is one of the few countries in the world that taxes both its corporations and citizens on global income; in contrast, most other countries use a territorial system in which they only collect taxes on profit-generation that occurs in territory they control (the U.S. conveniently also uses a territorial system for foreign companies operating in U.S. territory). What this means for a company like Apple is that the U.S. taxes the company not only on profits made on sales in the U.S. but also sales in the rest of the world. The situation isn't quite as bad as it seems: any tax Apple pays to a foreign country is deducted from its U.S. taxes (more on this in a moment), but the key takeaway is that Apple owes the U.S. taxes for all of its profits worldwide, including those at Apple Sales International.
Now, there is one very important loophole that you have all heard about: Apple doesn't actually pay those taxes until it brings the money back to the United States. Rather, it adds the taxes it owes as an accrued tax liability on its balance sheet ($24.1 billion at the end of FY 2015) and keeps the cash in the bank. Apple can then proceed to use that cash for its business (reducing its accrued tax liability) or, whenever it does repatriate it, send the U.S. Treasury its fair share. This is how you reconcile the tax percentage numbers I called out above: Apple reports a 26.4% effective tax rate, but when it comes to Apple Sales International its actual cash outflow for taxes is indeed 0.005%.
Remember, that 0.005% is about operations that are integral to Cork: Apple's contention is that everything else that happens at its Ireland subsidiaries is an extension of Apple's activities in the U.S. (primarily Cupertino). Indeed, Apple Sales International already sends Apple around $2 billion a year to pay for its share of R&D costs (basically, Apple Sales International pays a percentage of Apple's R&D spend equivalent to its percentage of Apple's overall revenue).
Apple's argument is that all of the value is created in the United States, and thus it is the U.S. that should collect the taxes (eventually). To that end the "head offices" that the European Commission is complaining about aren't untaxable, they're simply places to hold Apple's cash until it pays the U.S. And, by extension, Apple didn't form a special deal with Ireland, they simply came to an understanding that some of the work Apple did in Cork was in service of its U.S. business and thus not a part of Ireland's territory (and thus tax authority).
The European Commission has taken a much narrower view in which Apple Sales International is its own company, one that pays significant licensing fees to Apple in the U.S. for the technology it then sells to customers all over the world (including the European Union); and, given that it is its own company operating in the territory of Ireland, it ought to pay Irish income taxes.
Everyone is a Loser
Frankly, no one comes out of this mess looking good.
The European Commission
One of the weirder parts of this story is that the European Union has no authority over the tax policies of individual nation-states. However, the European Union does have authority to regulate competition, including banning state support of individual companies, and over the last few years the Commission has interpreted that to include tax breaks. That is the basis on which it is demanding Apple pay Ireland. It seems to this observer to be quite a stretch even in the Commission's narrow view of Apple Sales International as a standalone company: what advantage is Apple Sales International gaining relative to its competition?
The story is even uglier in the context of the actual role Apple Sales International plays in Apple's corporate structure: from this point of view there is absolutely nothing about Apple Sales International that has any bearing on the Commission's remit, nor anything Apple is guilty of beyond generating significant profits that the Commission feels entitled to and will retroactively change the rules to get.
Ireland
Ireland's government is in the unfortunate position of having to argue that they don't want billions of dollars from Apple; that's pretty terrible politically. Their position is understandable, though; Apple could have set up this entity anywhere in the world, because Apple's bottom-line position is exactly right: all of the value is created in the U.S. Because Ireland was cool with this they got thousands of jobs and a decent bit of cash for the trouble. Now that is all in jeopardy (not just from Apple, but from the many other American companies who have also set up their European operations in Ireland).
The United States
It is actually the United States, not Apple, that suffers the biggest financial hit if this stands up in court. Remember, Apple's plan all along was to pay taxes to the United States; being forced to pay Ireland instead will simply give them a big deduction in their U.S. tax bill. In other words, every single dollar that Apple pays Ireland is a transfer not from Apple to Ireland but from the U.S. Treasury to Ireland.
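To make the transfer mechanics concrete, here is a toy sketch in Python (these are not Apple's actual figures, and it assumes a simplified dollar-for-dollar foreign tax credit) showing why a retroactive Irish bill mostly offsets the U.S. tax that would otherwise be due on repatriation.

```python
# Toy figures, not Apple's actual numbers, to illustrate the point above.
foreign_profit = 100.0    # profit booked at the Irish subsidiary
us_rate = 0.35            # statutory U.S. corporate rate at the time

def us_tax_due_on_repatriation(foreign_tax_paid: float) -> float:
    """U.S. tax owed when the profit is repatriated, assuming a simplified
    dollar-for-dollar credit for foreign tax already paid."""
    return max(foreign_profit * us_rate - foreign_tax_paid, 0.0)

# If essentially no Irish tax was paid, the full 35 eventually goes to the U.S. Treasury.
print(us_tax_due_on_repatriation(0.0))    # 35.0
# If Ireland claws back 12.5 in tax, the U.S. bill shrinks by exactly that amount:
# the payment is effectively a transfer from the U.S. Treasury to Ireland.
print(us_tax_due_on_repatriation(12.5))   # 22.5
```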
The U.S. has only itself to blame: the current state of the corporate tax code, with its artificially high base rate, global tax basis, and tax-deferral loophole, leaves any CEO charged with fulfilling his or her fiduciary duty no choice but to set up the sort of entities Apple has, particularly given the moral hazard implicit in the fact that the U.S. always ends up giving a repatriation tax break every decade or so. A saner corporate tax code would result in Apple and everyone else repatriating cash and paying taxes immediately, to the benefit of both the U.S. Treasury and the U.S. economy.
Apple
I'm 2,000 words in because this topic is so complex and, frankly, I'm not sure whether I've succeeded in making the case that Apple didn't do anything wrong here. And that's despite the fact that I'm a professional analyst and you are all already very knowledgeable about these topics! What chance does the company have to make its case to a skeptical public? This is a PR hit without question, but the company's management is doing exactly what they have a fiduciary duty to do.
That said.
It was a little over one year ago that Tim Cook made what I called an "unfair and unrealistic privacy speech" where he claimed that "morality demands" an approach to privacy that basically delegitimized Google and Facebook's business models (which Cook also characterized incorrectly). It was easy for Cook to say because Apple's business model is compatible with a very strict view of privacy, but needless to say Cook didn't make any allowance for that reality.
One could absolutely argue that Apple is morally wrong here: if they think their value is created in the U.S., then they should be repatriating their money and paying their taxes, because it's the right thing to do. That they aren't shows that Cook's moralizing only goes as far as what is good for Apple's bottom line.
For me, I defend Google and Facebook's advertising model, and I defend Apple's right to follow the letter of the law on taxes, and I do think they're getting the short end of the stick from the European Commission. Moreover, I'm hopeful this episode will finally lead to meaningful reform of the U.S. tax code. I do hope, though, that we can get a lot less moralizing along the way.
~~
I obviously disagree with that last part. I don't see how it's hypocritical to take moral stands on the issues he has while taking his position on this. Tim Cook is a capitalist (as am I), and if you're the CEO of the most valuable publicly traded company on Earth you'd have to be. It is not inconsistent to be strong on privacy, or LGBT rights, or the environment and want to pay the legally required minimum on taxes and not a penny more.
submitted by 420weed to apple [link] [comments]

New Lancet Report: Want to die sooner? Eat carbs!

https://www.nytimes.com/2017/09/08/well/new-study-favors-fat-over-carbs.html
Summary from The Lancet:
Background The relationship between macronutrients and cardiovascular disease and mortality is controversial. Most available data are from European and North American populations where nutrition excess is more likely, so their applicability to other populations is unclear.
Methods The Prospective Urban Rural Epidemiology (PURE) study is a large, epidemiological cohort study of individuals aged 35–70 years (enrolled between Jan 1, 2003, and March 31, 2013) in 18 countries with a median follow-up of 7·4 years (IQR 5·3–9·3). Dietary intake of 135 335 individuals was recorded using validated food frequency questionnaires. The primary outcomes were total mortality and major cardiovascular events (fatal cardiovascular disease, non-fatal myocardial infarction, stroke, and heart failure). Secondary outcomes were all myocardial infarctions, stroke, cardiovascular disease mortality, and non-cardiovascular disease mortality. Participants were categorised into quintiles of nutrient intake (carbohydrate, fats, and protein) based on percentage of energy provided by nutrients. We assessed the associations between consumption of carbohydrate, total fat, and each type of fat with cardiovascular disease and total mortality. We calculated hazard ratios (HRs) using a multivariable Cox frailty model with random intercepts to account for centre clustering.
Findings During follow-up, we documented 5796 deaths and 4784 major cardiovascular disease events. Higher carbohydrate intake was associated with an increased risk of total mortality (highest [quintile 5] vs lowest quintile [quintile 1] category, HR 1·28 [95% CI 1·12–1·46], ptrend=0·0001) but not with the risk of cardiovascular disease or cardiovascular disease mortality. Intake of total fat and each type of fat was associated with lower risk of total mortality (quintile 5 vs quintile 1, total fat: HR 0·77 [95% CI 0·67–0·87], ptrend<0·0001; saturated fat, HR 0·86 [0·76–0·99], ptrend=0·0088; monounsaturated fat: HR 0·81 [0·71–0·92], ptrend<0·0001; and polyunsaturated fat: HR 0·80 [0·71–0·89], ptrend<0·0001). Higher saturated fat intake was associated with lower risk of stroke (quintile 5 vs quintile 1, HR 0·79 [95% CI 0·64–0·98], ptrend=0·0498). Total fat and saturated and unsaturated fats were not significantly associated with risk of myocardial infarction or cardiovascular disease mortality.
Interpretation High carbohydrate intake was associated with higher risk of total mortality, whereas total fat and individual types of fat were related to lower total mortality. Total fat and types of fat were not associated with cardiovascular disease, myocardial infarction, or cardiovascular disease mortality, whereas saturated fat had an inverse association with stroke. Global dietary guidelines should be reconsidered in light of these findings.
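For readers curious what the "multivariable Cox frailty model" described in the Methods above looks like in practice, here is a minimal Python sketch using the lifelines package on made-up data. This is not the PURE authors' code; lifelines' cluster_col option gives cluster-robust standard errors by centre, which approximates but is not identical to the random-intercept frailty model they describe.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500

# Made-up illustrative data: one row per participant (this is not PURE data).
df = pd.DataFrame({
    "carb_q5": rng.integers(0, 2, n),     # 1 = top carbohydrate quintile, 0 = bottom
    "age": rng.normal(50, 8, n),
    "centre": rng.integers(0, 10, n),     # study centre, used for clustering
})

# Simulate exponential survival times whose hazard rises with carb_q5 and age,
# with administrative censoring at 9 years of follow-up.
hazard = 0.02 * np.exp(0.25 * df["carb_q5"] + 0.03 * (df["age"] - 50))
t = rng.exponential(1.0 / hazard)
df["followup_years"] = np.minimum(t, 9.0)
df["died"] = (t <= 9.0).astype(int)

cph = CoxPHFitter()
# cluster_col gives cluster-robust (sandwich) standard errors by centre; this is an
# approximation to, not the same thing as, the random-intercept frailty model in the paper.
cph.fit(df, duration_col="followup_years", event_col="died",
        formula="carb_q5 + age", cluster_col="centre")
cph.print_summary()   # the exp(coef) column is the hazard ratio for each covariate
```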
TL;DR -- "Eat carbs if you want to die sooner!"
submitted by JimDunlap to keto [link] [comments]

Biomedical literature – a primer of sorts.

After this discussion with RMFN the other night it appears I owe him a post. These have become too few and far between lately, and for that I apologize.
Sometimes life draws you a Tower card (omenofdread knows what I'm talkin' about) and, no matter how much you think you've prepared and contemplated the zigs and zags, you still got nothin'.
Truth be told, things have settled down (and so have I) and this place is better than ever despite me pulling back for the past few months, and I couldn't be happier about it. Top notch – everyone – especially some new folks who have decided to jump in and start posting, as well as some old friends coming in with a crazy 10-part (at least?) series that I purposefully haven't commented on because I didn't want to associate whatever is in there with wherever I have been at mentally over the past few weeks. I'll read it LetsHackReality, trust me. It's going to be a long hot summer in America, methinks, and one that will last far longer than most think. But I digress!
I've come to take for granted the skills I acquired through ascension in the American public (and private) education system to its zenith, if you believe Ken Robinson's thesis here that the max level for education is the rank of professor (I never ranked up in my ivory tower, as I resigned before it was time to level up). But among my favorite classes to teach when I was there were the introductory courses on biomedical literature and specifically… clinical trials.
One of my favorite activities in my last year (when I knew I was going to bail and just did what I wanted, heh) was to have students track down news stories that referenced a 'new study' and put together a document covering both the news article and the study itself (full text of anything, ever, oh how I miss thee).
Now here I will attempt to give you all a crash course in how to read this stuff – and maybe someday soon we can try that activity I used to do with my students in the sub here (or we can make our own spot to play with the idea – kick that around and let me know if you're interested). Specifically I want to talk about drug trials. Clinical trials are meant to represent Science™ in a way, and they are set up to have two groups, A and B, where one group gets the experimental drug and the other group gets a placebo (or an active control/what is generally accepted as standard treatment/etc). Statistics are used to determine how many people you would need in both groups to be able to say that the result of the trial (whatever that is) is at least 95% likely to be the result of the intervention (our new drug) versus chance alone.
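If you want to see what that sample-size arithmetic looks like in practice, here's a minimal sketch using Python's statsmodels. The event rates are made up for illustration; the 95% corresponds to alpha = 0.05 and the 80% power corresponds to beta = 0.20 (both explained just below).

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Made-up planning numbers: we expect 7.5% of the control group and 6.0% of the
# drug group to hit the primary endpoint, and we want to detect that difference.
p_control, p_drug = 0.075, 0.060
effect = abs(proportion_effectsize(p_drug, p_control))   # Cohen's h for two proportions

# alpha = 0.05 is the 5% false-positive risk behind the "95%" above;
# power = 0.80 means beta = 0.20, i.e. a 20% chance of missing a real effect.
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))   # roughly 2,200 patients per group
```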
The 95% shows up in a strange statistic – the p-value – and the alpha value, which have a nice little summary here. We also have a statistic referred to as beta, which is related to the power of the study. Rather than talk about these concepts as abstract things, I figure we should just try and read something quickly and go from there. Let's take this (warning – PDF) study published within the last day or two in a mostly reputable journal – The New England Journal of Medicine.
In it – the authors studied the effects of a fancy new AstraZeneca product, ticagrelor (otherwise known as Brilinta in the US, which costs $333.56 (a magic number, take note) for one month) compared to aspirin, which is pretty much fuckin' free. In the Abstract you can get a pretty good idea of the study itself, including the methods, the population studied, some numbers, and a brief conclusion. The Abstract is what happens when a generation that grew up writing 5 Paragraph Essays decides to make TL;DRs.
We conducted an international double-blind, controlled trial in 674 centers in 33 countries, in which 13,199 patients with a nonsevere ischemic stroke or high-risk transient ischemic attack who had not received intravenous or intraarterial thrombolysis and were not considered to have had a cardioembolic stroke were randomly assigned within 24 hours after symptom onset, in a 1:1 ratio, to receive either ticagrelor (180 mg loading dose on day 1 followed by 90 mg twice daily for days 2 through 90) or aspirin (300 mg on day 1 followed by 100 mg daily for days 2 through 90). The primary end point was the time to the occurrence of stroke, myocardial infarction, or death within 90 days
Double-blind means that the study participants (the patients) and the study investigators (the authors) are unaware of whether the patient belongs to the ticagrelor group or the aspirin group. Their primary end point here (which is the result of the intervention) is whether or not patients had a stroke, a myocardial infarction (a heart attack) or died within 90 days. This is what is known as a composite endpoint – and they kind of suck. You have to dig deeper in the study itself to make sure that the intervention was statistically significant for all endpoints and not just 1 or 2 instead of all three. This can be a problem when study investigators lump in endpoints that aren't really the same (like getting hospitalized versus dying) although strokes, heart attacks and dying all suck pretty hard (with 1 being especially shitty). What does statistically significant mean?
During the 90 days of treatment, a primary end-point event occurred in 442 of the 6589 patients (6.7%) treated with ticagrelor, versus 497 of the 6610 patients (7.5%) treated with aspirin (hazard ratio, 0.89; 95% confidence interval [CI], 0.78 to 1.01; P=0.07). Ischemic stroke occurred in 385 patients (5.8%) treated with ticagrelor and in 441 patients (6.7%) treated with aspirin (hazard ratio, 0.87; 95% CI, 0.76 to 1.00). Major bleeding occurred in 0.5% of patients treated with ticagrelor and in 0.6% of patients treated with aspirin, intracranial hemorrhage in 0.2% and 0.3%, respectively, and fatal bleeding in 0.1% and 0.1%
A hazard ratio is a fancy statistic that study investigators compile from their data (roughly, the rate of the endpoint happening in group A compared to the rate of it happening in group B). The 95% confidence interval is another way to interpret the data, although it is a bit more expansive and gives a better picture of what really happened in the trial. For this trial, they report it is between 0.78 and 1.01. If you notice – the hazard ratio reported is smack in the middle of this – so imagine that this hazard ratio is the zenith of a bell curve and the Confidence Interval is the rest of the curve. Visually - here (ridiculously labeled bell curve on purpose because it's hilarious).
To read a hazard ratio and Confidence Interval in English – you have to convert it into a percentage. So a CI of 0.78 to 1.01 I would read and report as:
If you take ticagrelor instead of aspirin, after 90 days you are 11% less likely to have a stroke or heart attack, or to die, compared to aspirin (0.89). However – the 95% confidence interval means the true effect could plausibly be anywhere between 22% less likely (0.78) and 1% more likely (1.01). That last number is troubling (for AstraZeneca) because it means that their product could potentially increase the rate of these primary endpoints, which is, if you recall, pretty shitty.
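You can actually get very close to the published numbers with nothing more than the event counts quoted above. The sketch below computes a crude risk ratio and a Wald 95% CI in Python; note the paper's 0.89 is a Cox hazard ratio, which also accounts for when events happened and for censoring, so it is not exactly the same quantity.

```python
import math

# Event counts quoted above from the ticagrelor-vs-aspirin abstract.
events_ticagrelor, n_ticagrelor = 442, 6589
events_aspirin,    n_aspirin    = 497, 6610

risk_t = events_ticagrelor / n_ticagrelor      # ~6.7%
risk_a = events_aspirin / n_aspirin            # ~7.5%
rr = risk_t / risk_a                           # crude risk ratio, ~0.89

# Wald 95% CI on the log scale for the risk ratio.
se = math.sqrt(1/events_ticagrelor - 1/n_ticagrelor + 1/events_aspirin - 1/n_aspirin)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")   # ~0.89 (0.79-1.01)
# The published Cox hazard ratio (0.89, 0.78-1.01) differs slightly because it
# models time-to-event rather than a simple proportion at 90 days.
```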
In our trial involving patients with acute ischemic stroke or transient ischemic attack, ticagrelor was not found to be superior to aspirin in reducing the rate of stroke, myocardial infarction, or death at 90 days.
Not statistically significant. The authors cannot say with 95% or more confidence that this drug actually helps.
We can go on and read the article a bit if you all like – there are certainly other things in here to look at. There are ways of manipulating the data (or excluding specific sets of patients or comparing the intervention to sub-therapeutic doses of standard therapy) to make it work for the study itself and this type of fraud certainly occurs. It is reported on sometimes (autism amirite) and sometimes not.
With all that being said – I will be around all day and weekend to bounce ideas off of and keep this going wherever it need be.
submitted by JamesColesPardon to C_S_T [link] [comments]

New NBER Working Papers This Week - August 22nd, 2016

For access to gated papers, make a request on /Scholar. Most papers can also be found, ungated, on their author's website.
Feel free to discuss any of these papers in the comments section below. Please refrain from reposting any of these papers to this sub.

Ungated

Health

Marketplace Plan Payment Options for Dealing with High-Cost Enrollees: Timothy J. Layton, Thomas G. McGuire
Two of the three elements of the ACA’s “premium stabilization program,” reinsurance and risk corridors, are set to expire in 2017, leaving risk adjustment alone to protect plans against risk of high-cost cases. This paper considers potential modifications of the HHS risk adjustment methodology to maintain plan protection against risk from high-cost cases within the current regulatory framework. We show analytically that modifications of the transfer formula and of the risk adjustment model itself are mathematically equivalent to a conventional actuarially fair reinsurance policy. Furthermore, closely related modifications of the transfer formula or the risk adjustment model can improve on conventional reinsurance by figuring transfers or estimating risk adjustment model weights recognizing the presence of a reinsurance function. In the empirical section, we estimate risk adjustment models with an updated and selected version of the data used to calibrate the federal payment models, and show, using simulation methods, that proposed modifications improve fit at the person level and protect small insurers against high-cost risk better than conventional reinsurance. We simulate various “attachment points” for the reinsurance equivalent policies and quantify the tradeoffs of higher and lower attachment points.

Trade

The More We Die, The More We Sell? A Simple Test of the Home-Market Effect: Arnaud Costinot, Dave Donaldson, Margaret Kyle, Heidi Williams
The home-market effect, first hypothesized by Linder (1961) and later formalized by Krugman (1980), is the idea that countries with larger demand for some products at home tend to have larger sales of the same products abroad. In this paper, we develop a simple test of the home-market effect using detailed drug sales data from the global pharmaceutical industry. The core of our empirical strategy is the observation that a country’s exogenous demographic composition can be used as a predictor of the diseases that its inhabitants are most likely to die from and, in turn, the drugs that they are most likely to demand. We find that the correlation between predicted home demand and sales abroad is positive and greater than the correlation between predicted home demand and purchases from abroad. In short, countries tend to be net sellers of the drugs that they demand the most, as predicted by Linder (1961) and Krugman (1980).

Gated

Development

The Persistent Power of Behavioral Change: Long-Run Impacts of Temporary Savings Subsidies for the Poor: Simone Schaner
I use a field experiment in rural Kenya to study how temporary incentives to save impact long-run economic outcomes. Study participants randomly selected to receive large temporary interest rates on an individual bank account had significantly more income and assets 2.5 years after the interest rates expired. These changes are much larger than the short-run impacts on experimental bank account use and almost entirely driven by growth in entrepreneurship. Temporary interest rates directed to joint bank accounts had no detectable long-run impacts on entrepreneurship or income, but increased investment in household public goods and spousal consensus over finances.
Unintended Consequences of Rewards for Student Attendance: Results from a Field Experiment in Indian Classrooms: Sujata Visaria, Rajeev Dehejia, Melody M. Chao, Anirban Mukhopadhyay
In an experiment in non-formal schools in Indian slums, a reward scheme for attending a target number of school days increased average attendance when the scheme was in place, but had heterogeneous effects after it was removed. Among students with high baseline attendance, the incentive had no effect on attendance after it was discontinued, and test scores were unaffected. Among students with low baseline attendance, the incentive lowered post-incentive attendance, and test scores decreased. For these students, the incentive was also associated with lower interest in school material and lower optimism and confidence about their ability. This suggests incentives might have unintended long-term consequences for the very students they are designed to help the most.
Can Natural Gas Save Lives? Evidence from the Deployment of a Fuel Delivery System in a Developing Country: Resul Cesur, Erdal Tekin, Aydogan Ulker
There has been a widespread displacement of coal by natural gas as space heating and cooking technology in Turkey in the last two decades, triggered by the deployment of natural gas networks. In this paper, we examine the impact of this development on mortality among adults and the elderly. Our research design exploits the variation in the timing of the deployment and the intensity of expansion of natural gas networks at the provincial level using data from 2001 to 2014. The results indicate that the expansion of natural gas services has caused significant reductions in both the adult and the elderly mortality rates. According to our point estimates, a one-percentage point increase in the rate of subscriptions to natural gas services would lower the overall mortality rate by 1.4 percent, the adult mortality rate by 1.9 percent, and the elderly mortality rate by 1.2 percent. These findings are supported by our auxiliary analysis, which demonstrates that the expansion of natural gas networks has indeed led to a significant improvement in air quality. Furthermore, we show that the mortality gains for both the adult and the elderly populations are primarily driven by reductions in cardio-respiratory deaths, which are more likely to be due to conditions caused or exacerbated by air pollution. Finally, our analysis does not reveal any important gender differences in the estimated relationship between the deployment of natural gas networks and mortality.

Environmental

The Impact of Removing Tax Preferences for U.S. Oil and Natural Gas Production: Measuring Tax Subsidies by an Equivalent Price Impact Approach: Gilbert E. Metcalf
This paper presents a novel methodology for estimating impacts on domestic supply of oil and natural gas arising from changes in the tax treatment of oil and gas production. It corrects a downward bias when the ratio of aggregate tax expenditures to domestic production is used to measure the subsidy value of tax preferences. That latter approach underestimates the value of the tax preferences to firms by ignoring the time value of money.
The paper introduces the concept of the equivalent price impact, the change in price that has the same impact on aggregate drilling decisions as a change in the tax provisions for oil and gas drilling and production. Using this approach I find that removing the three largest tax preferences for the oil and gas industry would likely have very modest impacts on global oil production, consumption or prices. Domestic oil and gas production is estimated to decline by 4 to 5 percent over the long run. Global oil prices would rise by less than one percent. Domestic natural gas prices are estimated to rise by 7 to 10 percent. Changes to these tax provisions would have modest to negligible impacts on greenhouse gas emissions or energy security.
Estimating Path Dependence in Energy Transitions: Kyle C. Meng
Addressing climate change requires transitioning away from coal-based energy. Recent structural change models demonstrate that temporary interventions could induce permanent fuel switching when transitional dynamics exhibit strong path dependence. Exploiting changes in local coal supply driven by subsurface coal accessibility, I find that transitory shocks have strengthening effects on the fuel composition of two subsequent generations of U.S. electricity capital. To facilitate a structural interpretation, I develop a model which informs: tests that find scale effects as the relevant mechanism; recovery of the elasticity of substitution between coal and non-coal electricity; and simulations of future carbon emissions following temporary interventions.
Trophy Hunting vs. Manufacturing Energy: The Price-Responsiveness of Shale Gas: Richard G. Newell, Brian C. Prest, Ashley Vissing
We analyze the relative price elasticity of unconventional versus conventional natural gas extraction. We separately analyze three key stages of gas production: drilling wells, completing wells, and producing natural gas from the completed wells. We find that the important margin is drilling investment, and neither production from existing wells nor completion times respond strongly to prices. We estimate a long-run drilling elasticity of 0.7 for both conventional and unconventional sources. Nonetheless, because unconventional wells produce on average 2.7 times more gas per well than conventional ones, the long-run price responsiveness of supply is almost 3 times larger for unconventional compared to conventional gas.
Price of Long-Run Temperature Shifts in Capital Markets: Ravi Bansal, Dana Kiku, Marcelo Ochoa
We use the forward-looking information from the US and global capital markets to estimate the economic impact of global warming, specifically, long-run temperature shifts. We find that global warming carries a positive risk premium that increases with the level of temperature and that has almost doubled over the last 80 years. Consistent with our model, virtually all US equity portfolios have negative exposure (beta) to long-run temperature fluctuations. The elasticity of equity prices to temperature risks across global markets is significantly negative and has been increasing in magnitude over time along with the rise in temperature. We use our empirical evidence to calibrate a long-run risks model with temperature-induced disasters in distant output growth to quantify the social cost of carbon emissions. The model simultaneously matches the projected temperature path, the observed consumption growth dynamics, discount rates provided by the risk-free rate and equity market returns, and the estimated temperature elasticity of equity prices. We find that the long-run impact of temperature on growth implies a significant social cost of carbon emissions.
What Would it Take to Reduce US Greenhouse Gas Emissions 80% by 2050?: Geoffrey Heal
I investigate the cost and feasibility of reducing US GHG emissions by 80% from 2005 levels by 2050. The US has stated in its Paris COP 21 submission that this is its aspiration, and Hillary Clinton has chosen this as one of the goals of her climate policy. I suggest that this goal can be reached at a cost in the range of $42 to $176 bn/year, but that it is challenging. I assume that the goal is to be reached by extensive use of solar PV and wind energy (66% of generating capacity), in which case the cost of energy storage plays a key role in the overall cost. I conclude tentatively that more limited use of renewables (less than 50%) together with increased use of nuclear power might be less costly.
Collective Intertemporal Choice: the Possibility of Time Consistency: Antony Millner, Geoffrey Heal
Recent work on collective intertemporal choice suggests that non-dictatorial social preferences are generically time inconsistent. We argue that this claim conflates time consistency with two distinct properties of preferences: stationarity and time invariance. While the conjunction of time invariance and stationarity implies time consistency, the converse does not hold. Although social preferences cannot be stationary, they may be time consistent if time invariance is abandoned. If individuals are discounted utilitarians, revealed preference provides no guidance on whether social preferences should be time consistent or time invariant. Nevertheless, we argue that time invariant social preferences are often normatively and descriptively problematic.

Finance

How Rigged Are Stock Markets?: Evidence From Microsecond Timestamps: Robert P. Bartlett, III, Justin McCrary
We use new timestamp data from the two Securities Information Processors (SIPs) to examine SIP reporting latencies for quote and trade reports. Reporting latencies average 1.13 milliseconds for quotes and 22.84 milliseconds for trades. Despite these latencies, liquidity-taking orders gain on average $0.0002 per share when priced at the SIP-reported national best bid or offer (NBBO) rather than the NBBO calculated using exchanges’ direct data feeds. Trading surrounding SIP-priced trades shows little evidence that fast traders initiate these liquidity-taking orders to pick-off stale quotes. These findings contradict claims that fast traders systematically exploit traders who transact at the SIP NBBO.
Measuring Institutional Investors' Skill from Their Investments in Private Equity: Daniel R. Cavagnaro, Berk A. Sensoy, Yingdi Wang, Michael S. Weisbach
Using a large sample of institutional investors’ private equity investments in venture and buyout funds, we estimate the extent to which investors’ skill affects returns from private equity investments. We first consider whether investors have differential skill by comparing the distribution of investors’ returns relative to the bootstrapped distribution that would occur if funds were randomly distributed across investors. We find that the variance of actual performance is higher than the bootstrapped distribution, suggesting that higher and lower skilled investors consistently outperform and underperform. We then use a Bayesian approach developed by Korteweg and Sorensen (2015) to estimate the incremental effect of skill on performance. The results imply that a one standard deviation increase in skill leads to about a three percentage point increase in returns, suggesting that variation in institutional investors’ skill is an important driver of their returns.
Geographic Diversification and Banks' Funding Costs: Ross Levine, Chen Lin, Wensi Xie
We assess the impact of the geographic expansion of bank assets on the cost of banks’ interest-bearing liabilities. Existing research suggests that expansion can both intensify agency problems that increase funding costs and facilitate risk diversification that decreases funding costs. Using a newly developed identification strategy, we discover that the geographic expansion of banks across U.S. states lowered their funding costs, especially when banks are headquartered in states with lower macroeconomic covariance with the overall U.S. economy. The results are consistent with the view that geographic expansion offers large risk diversification opportunities that reduce funding costs.
The I Theory of Money: Markus K. Brunnermeier, Yuliy Sannikov
A theory of money needs a proper place for financial intermediaries. Intermediaries diversify risks and create inside money. In downturns, micro-prudent intermediaries shrink their lending activity, fire-sell assets and supply less inside money, exactly when money demand rises. The resulting Fisher disinflation hurts intermediaries and other borrowers. Shocks are amplified, volatility spikes and risk premia rise. Monetary policy is redistributive. Accommodative monetary policy that boosts assets held by balance sheet-impaired sectors, recapitalizes them and mitigates the adverse liquidity and disinflationary spirals. Since monetary policy cannot provide insurance and control risk-taking separately, adding macroprudential policy that limits leverage attains higher welfare.
Risk Preferences and The Macro Announcement Premium: Hengjie Ai, Ravi Bansal
The paper develops a theory for the equity premium around macroeconomic announcements. Stock returns realized around pre-scheduled macroeconomic announcements, such as the employment report and the FOMC statements, account for 55% of the market equity premium during the 1961-2014 period, and virtually 100% of it during the later period of 1997-2014, where more announcement data are available. We provide a characterization theorem for the set of intertemporal preferences that generate a positive announcement premium. Our theory establishes that the announcement premium identifies a significant deviation from expected utility and constitutes asset-market-based evidence for a large class of non-expected utility models that feature aversion to "Knightian uncertainty", for example, Gilboa and Schmeidler [30]. We also present a dynamic model to account for the evolution of the equity premium around macroeconomic announcements.
Cash Flow Duration and the Term Structure of Equity Returns: Michael Weber
The term structure of equity returns is downward-sloping: stocks with high cash flow duration earn 1.10% per month lower returns than short-duration stocks in the cross section. I create a measure of cash flow duration at the firm level using balance sheet data to show this novel fact. Factor models can explain only 50% of the return differential, and the difference in returns is three times larger after periods of high investor sentiment. I use institutional ownership as a proxy for short-sale constraints, and find the negative cross-sectional relationship between cash flow duration and returns is only contained within short-sale constrained stocks.
Assessing Point Forecast Accuracy by Stochastic Error Distance: Francis X. Diebold, Minchul Shin
We propose point forecast accuracy measures based directly on distance of the forecast-error c.d.f. from the unit step function at 0 ("stochastic error distance," or SED). We provide a precise characterization of the relationship between SED and standard predictive loss functions, and we show that all such loss functions can be written as weighted SED's. The leading case is absolute-error loss. Among other things, this suggests shifting attention away from conditional-mean forecasts and toward conditional-median forecasts.

Health

Are Publicly Insured Children Less Likely to be Admitted to Hospital than the Privately Insured (and Does it Matter)?: Diane Alexander, Janet Currie
There is continuing controversy about the extent to which publicly insured children are treated differently than privately insured children, and whether differences in treatment matter. We show that on average, hospitals are less likely to admit publicly insured children than privately insured children who present at the ER and the gap grows during high flu weeks, when hospital beds are in high demand. This pattern is present even after controlling for detailed diagnostic categories and hospital fixed effects, but does not appear to have any effect on measurable health outcomes such as repeat ER visits and future hospitalizations. Hence, our results raise the possibility that instead of too few publicly insured children being admitted during high flu weeks, there are too many publicly and privately insured children being admitted most of the time.
Early Effects of the 2010 Affordable Care Act Medicaid Expansions on Federal Disability Program Participation: Pinka Chatterji, Yue Li
We test whether early Affordable Care Act (ACA) Medicaid expansions in Connecticut (CT), Minnesota (MN), California (CA), and the District of Columbia (DC) affected SSI applications, SSI and DI awards, and the number of SSI and DI beneficiaries. We use a difference-in-difference (DD) approach, comparing SSI/DI outcomes pre and post each early Medicaid expansion (“Early Expanders”) to SSI/DI outcomes in states that expanded Medicaid in January 2014 (“Later Expanders”). We also use a synthetic control approach, in which we examine SSI/DI outcomes before and after the Medicaid expansion in each Early Expander state, utilizing a weighted combination of Later Expanders as a comparison group. In CT, the Medicaid expansion is associated with a statistically significant 7 percent reduction in SSI beneficiaries; this finding is consistent across the DD and synthetic control methods. For DC, MN and CA, we do not find consistent evidence that the Medicaid expansions affected disability-related outcomes.
The Pros and Cons of Sick Pay Schemes: Testing for Contagious Presenteeism and Noncontagious Absenteeism Behavior: Stefan Pichler, Nicolas R. Ziebarth
This paper provides an analytical framework and uses data from the US and Germany to test for the existence of contagious presenteeism and negative externalities in sickness insurance schemes. The first part exploits high-frequency Google Flu data and the staggered implementation of U.S. sick leave reforms to show in a reduced-form framework that population-level influenza-like disease rates decrease after employees gain access to paid sick leave. Next, a simple theoretical framework provides evidence on the underlying behavioral mechanisms. The model theoretically decomposes overall behavioral labor supply adjustments ('moral hazard') into contagious presenteeism and noncontagious absenteeism behavior and derives testable conditions. The last part illustrates how to implement the model exploiting German sick pay reforms and administrative industry-level data on certified sick leave by diagnoses. It finds that the labor supply elasticity for contagious diseases is significantly smaller than for noncontagious diseases. Under the identifying assumptions of the model, in addition to the evidence from the U.S., this finding provides indirect evidence for the existence of contagious presenteeism.
Immunization and Moral Hazard: The HPV Vaccine and Uptake of Cancer Screening: Ali Moghtaderi, Avi Dor
Immunization can cause moral hazard by reducing the cost of risky behaviors. In this study, we examine the effect of HPV vaccination for cervical cancer on participation in the Pap test, which is a diagnostic screening test to detect potentially precancerous and cancerous processes. It is strongly recommended for women between 21 and 65 years old even after taking the HPV vaccine. A reduction in willingness to have a Pap test as a result of HPV vaccination would signal the need for public health intervention. HPV vaccination is recommended at age eleven to twelve for routine vaccination, or up to age 26 for women not previously vaccinated. We present evidence that the probability of vaccination changes around this threshold. We identify the effect of vaccination using a fuzzy regression discontinuity design, centered on the recommended vaccination threshold age. The results show no evidence of ex ante moral hazard in the short-run. Sensitivity analyses using alternative specifications and subsamples are in general agreement. The estimates show that women who have been vaccinated are actually more likely to have a Pap test in the short-run, possibly due to increased awareness of its benefits.
The Rise in Life Expectancy, Health Trends among the Elderly, and the Demand for Care - A Selected Literature Review: Bjorn Lindgren
The objective is to review the evidence on (a) ageing and health and (b) the demand for health and social services among the elderly. The issues are: does the health status of the elderly improve over time, and how do trends in the health status of the elderly affect the demand for health care and elderly care? It is not a complete review, but it covers most of the recent empirical studies.
The reviewed literature provides strong evidence that the prevalence of chronic disease among the elderly has increased over time. There is also fairly strong evidence that the consequences of disease have become less problematic due to medical progress: decreased mortality risk, milder and slower development over time, making the time with disease (and health-care treatment) longer but less troublesome than before. Evidence also suggests the postponement of functional limitations and disability. Some of the reduction in disability can be attributed to improvements in treatments of chronic diseases, but it is also due to the increased use of assistive technology, accessibility of buildings, etc. The results indicate that the ageing individual is expected to need health care for a longer period of time than previous generations but elderly care for a shorter.

IO

Does Organizational Form Drive Competition? Evidence from Coffee Retailing: Brian Adams, Joshua Gans, Richard Hayes, Ryan Lampe
This article examines patterns of entry and exit in a relatively homogeneous product market to investigate the impact of entry on incumbent firms and market structure. In particular, we are interested in whether the organizational form of entrants matters for the competitive decisions of incumbents. We assess the impact of chain stores on independent retailers in the Melbourne coffee market using annual data on the location and entry status of 4,768 coffee retailers between 1991 and 2010. The long panel enables us to include market fixed effects to address the endogeneity of store locations. Logit regressions indicate that chain stores have no discernible effect on the exit or entry decisions of independent stores. However, each additional chain store increases the probability of another chain store exiting by 2.5 percentage points, and each additional independent cafe increases the probability of another independent cafe exiting by 0.5 percent. These findings imply that neighboring independents and chains operate almost as though they are in separate markets. We offer additional analysis suggesting consumer information as a cause of this differentiation.

Labor

Born with a Silver Spoon? Danish Evidence on Wealth Inequality in Childhood: Simon Halphen Boserup, Wojciech Kopczuk, Claus Thustrup Kreiner
We study wealth inequality in childhood using Danish wealth records from three decades. While teenagers have some earnings, we estimate that transfers account for at least 50 percent of wealth at age 18, and much more so for the rich children. Inheritance from grandparents does not appear quantitatively important, but we do find evidence that children receive inter vivos transfers. While wealth holdings are small in childhood, they have strong predictive power for future wealth in adulthood. Asset holdings at age 18 are more informative than parental wealth in predicting wealth of children many years later when they are in their 40s. Hence, childhood wealth reveals significant heterogeneity in the intergenerational transmission of wealth, which is not simply captured by parental wealth alone. We investigate why this is the case and rule out that childhood wealth in itself can accumulate enough to explain later wealth inequality. Our evidence indicates that childhood wealth is a proxy for a broad set of circumstances related to intergenerational transmission and future wealth accumulation, including savings/investment behavior and additional transfers.
Human Capital Investments and Expectations about Career and Family: Matthew Wiswall, Basit Zafar
This paper studies how individuals "believe" human capital investments will affect their future career and family life. We conducted a survey of high-ability currently enrolled college students and elicited beliefs about how their choice of college major, and whether to complete their degree at all, would affect a wide array of future events, including future earnings, employment, marriage prospects, potential spousal characteristics, and fertility. We find that students perceive large "returns" to human capital not only in their own future earnings, but also in a number of other dimensions (such as future labor supply and potential spouse's earnings). In a recent follow-up survey conducted six years after the initial data collection, we find a close connection between the expectations and current realizations. Finally, we show that both the career and family expectations help explain human capital choices.
Long-Term Orientation and Educational Performance: David Figlio, Paola Giuliano, Umut Özek, Paola Sapienza
We use remarkable population-level administrative education and birth records from Florida to study the role of Long-Term Orientation on the educational attainment of immigrant students living in the US. Controlling for the quality of schools and individual characteristics, students from countries with long term oriented attitudes perform better than students from cultures that do not emphasize the importance of delayed gratification. These students perform better in third grade reading and math tests, have larger test score gains over time, have fewer absences and disciplinary incidents, are less likely to repeat grades, and are more likely to graduate from high school in four years. Also, they are more likely to enroll in advanced high school courses, especially in scientific subjects. Parents from long term oriented cultures are more likely to secure better educational opportunities for their children. A larger fraction of immigrants speaking the same language in the school amplifies the effect of Long-Term Orientation on educational performance. We validate these results using a sample of immigrant students living in 37 different countries.
Employment Effects of the ACA Medicaid Expansions: Pauline Leung, Alexandre Mas
We examine whether the recent expansions in Medicaid from the Affordable Care Act reduced “employment lock” among childless adults who were previously ineligible for public coverage. We compare employment in states that chose to expand Medicaid versus those that chose not to expand, before and after implementation. We find that although the expansion increased Medicaid coverage by 3.0 percentage points among childless adults, there was no significant impact on employment.
Family Descent as a Signal of Managerial Quality: Evidence from Mutual Funds: Oleg Chuprinin, Denis Sosyura
We study the relation between mutual fund managers’ family backgrounds and their professional performance. Using hand-collected data from individual Census records on the wealth and income of managers’ parents, we find that managers from poor families deliver higher alphas than managers from rich families. This result is robust to alternative measures of fund performance, such as benchmark-adjusted return and value extracted from capital markets. We argue that managers born poor face higher entry barriers into asset management, and only the most skilled succeed. Consistent with this view, managers born rich are more likely to be promoted, while those born poor are promoted only if they outperform. Overall, we establish the first link between family descent of investment professionals and their ability to create value.

Law and Economics

How Do Voters Matter? Evidence from US Congressional Redistricting: Daniel B. Jones, Randall Walsh
How does the partisan composition of an electorate impact the policies adopted by an elected representative? We take advantage of variation in the partisan composition of Congressional districts stemming from Census-initiated redistricting in the 1990’s, 2000’s, and 2010’s. Using this variation, we examine how an increase in Democrat share within a district impacts the district representative’s roll call voting. We find that an increase in Democrat share within a district causes more leftist roll call voting. This occurs because a Democrat is more likely to hold the seat, but also because – in contrast to existing empirical work – partisan composition has a direct effect on the roll call voting of individual representatives. This is true of both Democrats and Republicans. It is also true regardless of the nature of the redistricting (e.g., whether the redistricting was generated by a partisan or non-partisan process).
The Marginal Propensity to Consume Over the Business Cycle: Tal Gross, Matthew J. Notowidigdo, Jialan Wang
This paper estimates how the marginal propensity to consume (MPC) varies over the business cycle by exploiting exogenous variation in credit card borrowing limits. Ten years after an individual declares Chapter 7 bankruptcy, the record of the bankruptcy is removed from her credit report, generating an immediate and persistent increase in credit score. We study the effects of “bankruptcy flag” removal using a sample of over 160,000 bankruptcy filers whose flags were removed between 2004 and 2011. We document that in the year following flag removal, credit card limits increase by $780 and credit card balances increase by roughly $290, implying an “MPC out of liquidity” of 0.37. We find a significantly higher MPC during the Great Recession, with an average MPC roughly 20–30 percent larger between 2007 and 2009 compared to surrounding years. We find no evidence that the counter-cyclical variation in the average MPC is accounted for by compositional changes or by changes over time in the supply of credit following bankruptcy flag removal. These results are consistent with models where liquidity constraints bind more frequently during recessions.
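The quoted "MPC out of liquidity" follows directly from the two reported changes; a quick back-of-the-envelope check using only the figures in the abstract:

```python
# Back-of-the-envelope check of the quoted "MPC out of liquidity".
credit_limit_increase = 780   # $ increase in credit card limits after flag removal
balance_increase = 290        # $ increase in credit card balances (the spending response)

mpc_out_of_liquidity = balance_increase / credit_limit_increase
print(f"MPC out of liquidity: {mpc_out_of_liquidity:.2f}")  # about 0.37
```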

Macro

Estimating Currency Misalignment Using the Penn Effect: It's Not as Simple As It Looks: Yin-Wong Cheung, Menzie Chinn, Xin Nong
We investigate the strength of the Penn effect in the most recent version of the Penn World Tables (PWTs). We find that the earlier findings of a Penn effect are confirmed, but that there is some evidence for nonlinearity. Developed and developing countries display different types of nonlinear behaviors. The nonlinear behaviors are likely attributable to differences across countries and do not change when additional control variables are added. We confirm earlier findings of large RMB misalignment in the mid-2000’s, but find that by 2011, the RMB seems near equilibrium. While the Penn effect is quite robust across datasets, estimated misalignment can noticeably change from a linear to a nonlinear specification, and from dataset to dataset.
From Chronic Inflation to Chronic Deflation: Focusing on Expectations and Liquidity Disarray Since WWII: Guillermo A. Calvo
The paper discusses policy relevant models, going from (1) chronic inflation in the 20th century after WWII, to (2) credit sudden stop episodes that got exacerbated in Developed Market economies after the 2008 Lehman crisis, and appear to be associated with chronic deflation. The discussion highlights the importance of expectations and liquidity, and warns about the risks of relegating liquidity to a secondary role, as has been the practice in mainstream macro models prior to the Great Recession.

Public Economics

Attention Variation and Welfare: Theory and Evidence from a Tax Salience Experiment: Dmitry Taubinsky, Alex Rees-Jones
This paper shows that accounting for variation in mistakes can be crucial for welfare analysis. Focusing on consumer underreaction to not-fully-salient sales taxes, we show theoretically that the efficiency costs of taxation are amplified by 1) individual differences in underreaction and 2) the degree to which attention is increasing with the size of the tax rate. To empirically assess the importance of these issues, we implement an online shopping experiment in which 2,998 consumers--matching the U.S. adult population on key demographics--purchase common household products, facing tax rates that vary in size and salience. We find that: 1) there are significant individual differences in underreaction to taxes. Accounting for this heterogeneity increases the efficiency cost of taxation estimates by at least 200%, as compared to estimates generated from a representative agent model. 2) Tripling existing sales tax rates roughly doubles consumers' attention to taxes. Our results provide new insights into the mechanisms and determinants of boundedly rational processing of not-fully-salient incentives, and our general approach provides a framework for robust behavioral welfare analysis.
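A stylized illustration of the first mechanism, not the paper's model or estimates: if each consumer's efficiency loss is roughly quadratic in the portion of the tax they act on, the average loss depends on the mean squared reaction, which heterogeneity pushes above the value implied by the average reaction alone. All numbers below are hypothetical:

```python
# Stylized illustration (not the paper's estimator): each consumer's efficiency loss is taken
# to scale with the square of the part of the tax they react to.
# theta = fraction of the tax the consumer "acts on" (1 = full reaction, 0 = ignores it).
import statistics

tax = 0.075          # hypothetical sales tax rate
slope = 1.0          # hypothetical demand-slope constant folded into the loss formula

thetas = [0.0, 0.2, 0.5, 1.0, 1.3]   # heterogeneous under- and overreaction across consumers
theta_bar = statistics.mean(thetas)

loss_heterogeneous = statistics.mean([0.5 * slope * (th * tax) ** 2 for th in thetas])
loss_representative = 0.5 * slope * (theta_bar * tax) ** 2

# Ratio > 1: a representative-agent calculation based on the average reaction understates the cost.
print(loss_heterogeneous / loss_representative)
```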

Trade

Technology and Production Fragmentation: Domestic versus Foreign Sourcing: Teresa C. Fort
This paper provides direct empirical evidence on the relationship between technology and firms’ global sourcing strategies. Using new data on U.S. firms’ decisions to contract for manufacturing services from domestic or foreign suppliers, I show that changes in firm use of communication technology between 2002 and 2007 can explain almost one quarter of the increase in fragmentation over the period. The effect of firm technology also differs significantly across industries; in 2007, it is 20 percent higher, relative to the mean, in industries with production specifications that are easier to codify in an electronic format. These patterns suggest that technology lowers coordination costs, though its effect is disproportionately higher for domestic rather than foreign sourcing. The larger impact on domestic fragmentation highlights its importance as an alternative to offshoring, and can be explained by complementarities between technology and worker skill. High technology firms and industries are more likely to source from high human capital countries, and the differential impact of technology across industries is strongly increasing in country human capital.
Heterogeneous Frictional Costs Across Industries in Cross-border Mergers and Acquisitions: Bruce A. Blonigen, Donghyun Lee
While there has been significant research to explore the determinants (and frictions) of foreign direct investment (FDI), past literature primarily focuses on country-wide FDI patterns with little examination of sectoral heterogeneity in FDI. Anecdotally, there is substantial sectoral heterogeneity in FDI patterns. For example, a substantial share of FDI (around 40-50%) is in the manufacturing sector, yet manufacturing accounts for a relatively small share of production activity in the developed economies responsible for most cross-border M&A. In this paper, we extend the Head and Ries (2008) model of cross-border M&A to account for sectoral heterogeneity and estimate the varying effects of FDI frictions across sectors using cross-border M&A data spanning 1985 through 2013. We find that non-manufacturing sectors generally have greater sensitivity to cross-border M&A frictions than is true for manufacturing, including such frictions as physical distance, cultural distance, and common language. Tradeability is positively associated with greater cross-border M&A, and the lack of tradeability is an additional friction for many non-manufacturing sectors because they consist mainly of non-tradeable goods.
submitted by IamA_GIffen_Good_AMA to EconPapers

Associations of fats and carbohydrate intake with cardiovascular disease and mortality in 18 countries from five continents (PURE): a prospective cohort study

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)32252-3/fulltext?elsca1=tlxpr
Background
The relationship between macronutrients and cardiovascular disease and mortality is controversial. Most available data are from European and North American populations where nutrition excess is more likely, so their applicability to other populations is unclear.
Methods
The Prospective Urban Rural Epidemiology (PURE) study is a large, epidemiological cohort study of individuals aged 35–70 years (enrolled between Jan 1, 2003, and March 31, 2013) in 18 countries with a median follow-up of 7·4 years (IQR 5·3–9·3). Dietary intake of 135 335 individuals was recorded using validated food frequency questionnaires. The primary outcomes were total mortality and major cardiovascular events (fatal cardiovascular disease, non-fatal myocardial infarction, stroke, and heart failure). Secondary outcomes were all myocardial infarctions, stroke, cardiovascular disease mortality, and non-cardiovascular disease mortality. Participants were categorised into quintiles of nutrient intake (carbohydrate, fats, and protein) based on percentage of energy provided by nutrients. We assessed the associations between consumption of carbohydrate, total fat, and each type of fat with cardiovascular disease and total mortality. We calculated hazard ratios (HRs) using a multivariable Cox frailty model with random intercepts to account for centre clustering.
Findings
During follow-up, we documented 5796 deaths and 4784 major cardiovascular disease events. Higher carbohydrate intake was associated with an increased risk of total mortality (highest [quintile 5] vs lowest quintile [quintile 1] category, HR 1·28 [95% CI 1·12–1·46], ptrend=0·0001) but not with the risk of cardiovascular disease or cardiovascular disease mortality. Intake of total fat and each type of fat was associated with lower risk of total mortality (quintile 5 vs quintile 1, total fat: HR 0·77 [95% CI 0·67–0·87], ptrend<0·0001; saturated fat, HR 0·86 [0·76–0·99], ptrend=0·0088; monounsaturated fat: HR 0·81 [0·71–0·92], ptrend<0·0001; and polyunsaturated fat: HR 0·80 [0·71–0·89], ptrend<0·0001). Higher saturated fat intake was associated with lower risk of stroke (quintile 5 vs quintile 1, HR 0·79 [95% CI 0·64–0·98], ptrend=0·0498). Total fat and saturated and unsaturated fats were not significantly associated with risk of myocardial infarction or cardiovascular disease mortality.
Interpretation
High carbohydrate intake was associated with higher risk of total mortality, whereas total fat and individual types of fat were related to lower total mortality. Total fat and types of fat were not associated with cardiovascular disease, myocardial infarction, or cardiovascular disease mortality, whereas saturated fat had an inverse association with stroke. Global dietary guidelines should be reconsidered in light of these findings.
submitted by greg_barton to ketoscience

The IB Diploma Programme Economics course forms part of group 3 - individuals and societies. The study of economics is essentially about dealing with scarcity, resource allocation and the methods and processes by which choices are made in the satisfaction of human wants. As a dynamic social science, economics uses scientific metho…

Economics is a social science concerned with the factors that determine the production, distribution, and consumption of goods and services. The term economics comes from the Ancient Greek οἰκονομία from οἶκος (oikos, "house") and νόμος (nomos, "custom" or "law"), hence "rules of the house (hold for good management)".[1] 'Political economy' was the earlier name for the subject, but economists in the late 19th century suggested "economics" as a shorter term for "economic science" to establish itself as a separate discipline outside of political science and other social sciences.[2]
Economics focuses on the behaviour and interactions of economic agents and how economies work. Consistent with this focus, primary textbooks often distinguish between microeconomics and macroeconomics. Microeconomics examines the behaviour of basic elements in the economy, including individual agents and markets, their interactions, and the outcomes of interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyses the entire economy (meaning aggregated production, consumption, savings, and investment) and issues affecting it, including unemployment of resources (labour, capital, and land), inflation, economic growth, and the public policies that address these issues (monetary, fiscal, and other policies).
Other broad distinctions within economics include those between positive economics, describing "what is", and normative economics, advocating "what ought to be"; between economic theory and applied economics; between rational and behavioural economics; and between mainstream economics and heterodox economics.[3]
Besides the traditional concern with production, distribution, and consumption in an economy, economic analysis may be applied throughout society, as in business, finance, health care, and government. Economic analyses may also be applied to such diverse subjects as crime,[4] education,[5] the family, law, politics, religion,[6] social institutions, war,[7] science,[8] and the environment.[9] Education, for example, requires time, effort, and expenses, plus the foregone income and experience, yet these losses can be weighed against future benefits education may bring to the agent or the economy. At the turn of the 21st century, the expanding domain of economics in the social sciences has been described as economic imperialism.[10] The ultimate goal of economics is to improve the living conditions of people in their everyday life.[11]
There are a variety of modern definitions of economics. Some of the differences may reflect evolving views of the subject or different views among economists.[13] Scottish philosopher Adam Smith (1776) defined what was then called political economy as "an inquiry into the nature and causes of the wealth of nations", in particular as:
a branch of the science of a statesman or legislator [with the twofold objectives of providing] a plentiful revenue or subsistence for the people ... [and] to supply the state or commonwealth with a revenue for the publick services.[14]
J.-B. Say (1803), distinguishing the subject from its public-policy uses, defines it as the science of production, distribution, and consumption of wealth.[15] On the satirical side, Thomas Carlyle (1849) coined "the dismal science" as an epithet for classical economics, in this context, commonly linked to the pessimistic analysis of Malthus (1798).[16] John Stuart Mill (1844) defines the subject in a social context as:
The science which traces the laws of such of the phenomena of society as arise from the combined operations of mankind for the production of wealth, in so far as those phenomena are not modified by the pursuit of any other object.[17]
Alfred Marshall provides a still widely cited definition in his textbook Principles of Economics (1890) that extends analysis beyond wealth and from the societal to the microeconomic level:
Economics is a study of man in the ordinary business of life. It enquires how he gets his income and how he uses it. Thus, it is on the one side, the study of wealth and on the other and more important side, a part of the study of man.[18]
Lionel Robbins (1932) developed implications of what has been termed "[p]erhaps the most commonly accepted current definition of the subject":[19]
Economics is a science which studies human behaviour as a relationship between ends and scarce means which have alternative uses.[20]
Robbins describes the definition as not classificatory in "pick[ing] out certain kinds of behaviour" but rather analytical in "focus[ing] attention on a particular aspect of behaviour, the form imposed by the influence of scarcity."[21] He affirmed that previous economists have usually centred their studies on the analysis of wealth: how wealth is created (production), distributed, and consumed; and how wealth can grow.[22] But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning as its goal (a sought-after end), generates both costs and benefits, and uses resources (human life and other costs) to attain that goal. If the war is not winnable, or if the expected costs outweigh the benefits, the deciding actors (assuming they are rational) may never go to war (a decision) but rather explore other alternatives. We cannot define economics as the science that studies wealth, war, crime, education, and any other field economic analysis can be applied to; but rather as the science that studies a particular common aspect of each of those subjects (they all use scarce resources to attain a sought-after end).
Some subsequent comments criticized the definition as overly broad in failing to limit its subject matter to analysis of markets. From the 1960s, however, such comments abated as the economic theory of maximizing behaviour and rational-choice modelling expanded the domain of the subject to areas previously treated in other fields.[23] There are other criticisms as well, such as in scarcity not accounting for the macroeconomics of high unemployment.[24]
Gary Becker, a contributor to the expansion of economics into new areas, describes the approach he favours as "combin[ing the] assumptions of maximizing behaviour, stable preferences, and market equilibrium, used relentlessly and unflinchingly."[25] One commentary characterizes the remark as making economics an approach rather than a subject matter but with great specificity as to the "choice process and the type of social interaction that [such] analysis involves." The same source reviews a range of definitions included in principles of economics textbooks and concludes that the lack of agreement need not affect the subject-matter that the texts treat. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving, or should evolve.[26]
Markets
(Figure captions: economists study trade, production, and consumption decisions, such as those that occur in a traditional marketplace; in virtual markets, such as the São Paulo Stock Exchange in Brazil, buyer and seller are not present and trade via intermediaries and electronic information.)
Microeconomics examines how entities, forming a market structure, interact within a market to create a market system. These entities include private and public players with various classifications, typically operating under scarcity of tradable units and government regulation. The item traded may be a tangible product such as apples or a service such as repair services, legal counsel, or entertainment.
In theory, in a free market the aggregate (sum) of quantity demanded by buyers and quantity supplied by sellers will be equal and reach economic equilibrium over time in reaction to price changes; in practice, various issues may prevent equilibrium, and any equilibrium reached may not necessarily be morally equitable. For example, if the supply of healthcare services is limited by external factors, the equilibrium price may be unaffordable for many who desire it but cannot pay for it.
Various market structures exist. In perfectly competitive markets, no participants are large enough to have the market power to set the price of a homogeneous product. In other words, every participant is a "price taker" as no participant influences the price of a product. In the real world, markets often experience imperfect competition.
Forms include monopoly (in which there is only one seller of a good), duopoly (in which there are only two sellers of a good), oligopoly (in which there are few sellers of a good), monopolistic competition (in which there are many sellers producing highly differentiated goods), monopsony (in which there is only one buyer of a good), and oligopsony (in which there are few buyers of a good). Unlike perfect competition, imperfect competition invariably means market power is unequally distributed. Firms under imperfect competition have the potential to be "price makers", which means that, by holding a disproportionately high share of market power, they can influence the prices of their products.
Microeconomics studies individual markets by simplifying the economic system by assuming that activity in the market being analysed does not affect other markets. This method of analysis is known as partial-equilibrium analysis (supply and demand). This method aggregates (the sum of all activity) in only one market. General-equilibrium theory studies various markets and their behaviour. It aggregates (the sum of all activity) across all markets. This method studies both changes in markets and their interactions leading towards equilibrium.[27]
Production, cost, and efficiency
In microeconomics, production is the conversion of inputs into outputs. It is an economic process that uses inputs to create a commodity or a service for exchange or direct use. Production is a flow and thus a rate of output per period of time. Distinctions include such production alternatives as for consumption (food, haircuts, etc.) vs. investment goods (new tractors, buildings, roads, etc.), public goods (national defence, smallpox vaccinations, etc.) or private goods (new computers, bananas, etc.), and "guns" vs "butter".
Opportunity cost refers to the economic cost of production: the value of the next best opportunity foregone. Choices must be made between desirable yet mutually exclusive actions. It has been described as expressing "the basic relationship between scarcity and choice".[28] For example, if a baker uses a sack of flour to make pretzels one morning, then the baker cannot use either the flour or the morning to make bagels instead. Part of the cost of making pretzels is that neither the flour nor the morning are available any longer, for use in some other way. The opportunity cost of an activity is an element in ensuring that scarce resources are used efficiently, such that the cost is weighed against the value of that activity in deciding on more or less of it. Opportunity costs are not restricted to monetary or financial costs but could be measured by the real cost of output forgone, leisure, or anything else that provides the alternative benefit (utility).[29]
Inputs used in the production process include such primary factors of production as labour services, capital (durable produced goods used in production, such as an existing factory), and land (including natural resources). Other inputs may include intermediate goods used in production of final goods, such as the steel in a new car.
Economic efficiency describes how well a system generates desired output with a given set of inputs and available technology. Efficiency is improved if more output is generated without changing inputs, or in other words, the amount of "waste" is reduced. A widely accepted general standard is Pareto efficiency, which is reached when no further change can make someone better off without making someone else worse off.
(Figure caption: an example production–possibility frontier with illustrative points marked.)
The production–possibility frontier (PPF) is an expository figure for representing scarcity, cost, and efficiency. In the simplest case an economy can produce just two goods (say "guns" and "butter"). The PPF is a table or graph (as at the right) showing the different quantity combinations of the two goods producible with a given technology and total factor inputs, which limit feasible total output. Each point on the curve shows potential total output for the economy, which is the maximum feasible output of one good, given a feasible output quantity of the other good.
Scarcity is represented in the figure by people being willing but unable in the aggregate to consume beyond the PPF (such as at X) and by the negative slope of the curve.[30] If production of one good increases along the curve, production of the other good decreases, an inverse relationship. This is because increasing output of one good requires transferring inputs to it from production of the other good, decreasing the latter.
The slope of the curve at a point on it gives the trade-off between the two goods. It measures what an additional unit of one good costs in units forgone of the other good, an example of a real opportunity cost. Thus, if one more Gun costs 100 units of butter, the opportunity cost of one Gun is 100 Butter. Along the PPF, scarcity implies that choosing more of one good in the aggregate entails doing with less of the other good. Still, in a market economy, movement along the curve may indicate that the choice of the increased output is anticipated to be worth the cost to the agents.
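The guns-and-butter arithmetic above can be read off a PPF table directly; a small sketch with a hypothetical frontier (the numbers are made up for illustration):

```python
# Hypothetical production-possibility frontier: feasible (guns, butter) combinations.
ppf = [(0, 1000), (1, 900), (2, 750), (3, 550), (4, 300), (5, 0)]

# Opportunity cost of each additional gun = butter forgone when moving along the frontier.
for (g0, b0), (g1, b1) in zip(ppf, ppf[1:]):
    print(f"gun #{g1}: costs {b0 - b1} units of butter")
# The forgone butter per extra gun rises (100, 150, 200, 250, 300): increasing opportunity cost.
```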
By construction, each point on the curve shows productive efficiency in maximizing output for given total inputs. A point inside the curve (as at A), is feasible but represents production inefficiency (wasteful use of inputs), in that output of one or both goods could increase by moving in a northeast direction to a point on the curve. Examples cited of such inefficiency include high unemployment during a business-cycle recession or economic organization of a country that discourages full use of resources. Being on the curve might still not fully satisfy allocative efficiency (also called Pareto efficiency) if it does not produce a mix of goods that consumers prefer over other points.
Much applied economics in public policy is concerned with determining how the efficiency of an economy can be improved. Recognizing the reality of scarcity and then figuring out how to organize society for the most efficient use of resources has been described as the "essence of economics", where the subject "makes its unique contribution."[31]
Specialization
(Figure caption: a map showing the main trade routes for goods within late medieval Europe.)
Specialization is considered key to economic efficiency based on theoretical and empirical considerations. Different individuals or nations may have different real opportunity costs of production, say from differences in stocks of human capital per worker or capital/labour ratios. According to theory, this may give a comparative advantage in production of goods that make more intensive use of the relatively more abundant, thus relatively cheaper, input.
Even if one region has an absolute advantage as to the ratio of its outputs to inputs in every type of output, it may still specialize in the output in which it has a comparative advantage and thereby gain from trading with a region that lacks any absolute advantage but has a comparative advantage in producing something else.
It has been observed that a high volume of trade occurs among regions even with access to a similar technology and mix of factor inputs, including high-income countries. This has led to investigation of economies of scale and agglomeration to explain specialization in similar but differentiated product lines, to the overall benefit of respective trading parties or regions.[32]
The general theory of specialization applies to trade among individuals, farms, manufacturers, service providers, and economies. Among each of these production systems, there may be a corresponding division of labour with different work groups specializing, or correspondingly different types of capital equipment and differentiated land uses.[33]
An example that combines features above is a country that specializes in the production of high-tech knowledge products, as developed countries do, and trades with developing nations for goods produced in factories where labour is relatively cheap and plentiful, resulting in differences in opportunity costs of production. More total output and utility thereby results from specializing in production and trading than if each country produced its own high-tech and low-tech products.
Theory and observation set out the conditions such that market prices of outputs and productive inputs select an allocation of factor inputs by comparative advantage, so that (relatively) low-cost inputs go to producing low-cost outputs. In the process, aggregate output may increase as a by-product or by design.[34] Such specialization of production creates opportunities for gains from trade whereby resource owners benefit from trade in the sale of one type of output for other, more highly valued goods. A measure of gains from trade is the increased income levels that trade may facilitate.[35]
Supply and demand
(Figure caption: a graph with quantity on the x-axis and price on the y-axis. The supply and demand model describes how prices vary as a result of a balance between product availability and demand. The graph depicts an increase, that is, a right-shift, in demand from D1 to D2, along with the consequent increase in price and quantity required to reach a new equilibrium point on the supply curve (S).)
Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy.[36] The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power.
For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximization" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesized relation of each individual consumer for ranking different commodity bundles as more or less preferred.
The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply.
Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit-maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged.
That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. The "Law of Supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factors inputs of production are all taken to be constant for a specific time period of evaluation of supply.
Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply.
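For concreteness, here is a toy calculation of the equilibrating logic just described, using hypothetical linear demand and supply curves (not taken from the text above):

```python
# Hypothetical linear demand and supply: Qd = 100 - 2P, Qs = -20 + 4P.
def quantity_demanded(p): return 100 - 2 * p
def quantity_supplied(p): return -20 + 4 * p

# Equilibrium: Qd = Qs  =>  100 - 2P = -20 + 4P  =>  P* = 20, Q* = 60.
p_star = 120 / 6
print(p_star, quantity_demanded(p_star), quantity_supplied(p_star))  # 20.0 60.0 60.0

# Below the equilibrium price there is excess demand (a shortage), above it excess supply (a surplus).
for p in (15, 20, 25):
    print(p, quantity_demanded(p) - quantity_supplied(p))  # +30, 0, -30
```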
For a given quantity of a consumer good, the point on the demand curve indicates the value, or marginal utility, to consumers for that unit. It measures what the consumer would be prepared to pay for that unit.[37] The corresponding point on the supply curve measures marginal cost, the increase in total cost to the supplier for the corresponding unit of the good. The price in equilibrium is determined by supply and demand. In a perfectly competitive market, supply and demand equate marginal cost and marginal utility at equilibrium.[38]
On the supply side of the market, some factors of production are described as (relatively) variable in the short run, which affects the cost of changing output levels. Their usage rates can be changed easily, such as electrical power, raw-material inputs, and over-time and temp work. Other inputs are relatively fixed, such as plant and equipment and key personnel. In the long run, all inputs may be adjusted by management. These distinctions translate to differences in the elasticity (responsiveness) of the supply curve in the short and long runs and corresponding differences in the price-quantity change from a shift on the supply or demand side of the market.
Marginalist theory, such as above, describes the consumers as attempting to reach most-preferred positions, subject to income and wealth constraints while producers attempt to maximize profits subject to their own constraints, including demand for goods produced, technology, and the price of inputs. For the consumer, that point comes where marginal utility of a good, net of price, reaches zero, leaving no net gain from further consumption increases. Analogously, the producer compares marginal revenue (identical to price for the perfect competitor) against the marginal cost of a good, with marginal profit the difference. At the point where marginal profit reaches zero, further increases in production of the good stop. For movement to market equilibrium and for changes in equilibrium, price and quantity also change "at the margin": more-or-less of something, rather than necessarily all-or-nothing.
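A small numeric sketch of this marginal reasoning for a price-taking firm; the price and cost schedule are hypothetical:

```python
price = 12.0  # marginal revenue for a price taker equals the market price

def marginal_cost(q):
    # hypothetical rising marginal cost of producing the q-th unit
    return 2.0 + 0.5 * q

q = 0
while marginal_cost(q + 1) <= price:  # keep producing while the next unit still pays for itself
    q += 1
print(f"profit-maximizing output: {q} units")  # stops where marginal cost first exceeds marginal revenue
```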
Other applications of demand and supply include the distribution of income among the factors of production, including labour and capital, through factor markets. In a competitive labour market for example the quantity of labour employed and the price of labour (the wage rate) depends on the demand for labour (from employers for production) and supply of labour (from potential workers). Labour economics examines the interaction of workers and employers through such markets to explain patterns and changes of wages and other labour income, labour mobility, and (un)employment, productivity through human capital, and related public-policy issues.[39]
Demand-and-supply analysis is used to explain the behaviour of perfectly competitive markets, but as a standard of comparison it can be extended to any type of market. It can also be generalized to explain variables across the economy, for example, total output (estimated as real GDP) and the general price level, as studied in macroeconomics.[40] Tracing the qualitative and quantitative effects of variables that change supply and demand, whether in the short or long run, is a standard exercise in applied economics. Economic theory may also specify conditions such that supply and demand through the market is an efficient mechanism for allocating resources.[41]
Firms
People frequently do not trade directly on markets. Instead, on the supply side, they may work in and produce through firms. The most obvious kinds of firms are corporations, partnerships and trusts. According to Ronald Coase, people begin to organize their production in firms when the costs of doing business become lower than doing it on the market.[42] Firms combine labour and capital, and can achieve far greater economies of scale (when the average cost per unit declines as more units are produced) than individual market trading.
In perfectly competitive markets studied in the theory of supply and demand, there are many producers, none of which significantly influence price. Industrial organization generalizes from that special case to study the strategic behaviour of firms that do have significant control of price. It considers the structure of such markets and their interactions. Common market structures studied besides perfect competition include monopolistic competition, various forms of oligopoly, and monopoly.[43]
Managerial economics applies microeconomic analysis to specific decisions in business firms or other management units. It draws heavily from quantitative methods such as operations research and programming and from statistical methods such as regression analysis in the absence of certainty and perfect knowledge. A unifying theme is the attempt to optimize business decisions, including unit-cost minimization and profit maximization, given the firm's objectives and constraints imposed by technology and market conditions.[44]
Uncertainty and game theory
Uncertainty in economics is an unknown prospect of gain or loss, whether quantifiable as risk or not. Without it, household behaviour would be unaffected by uncertain employment and income prospects, financial and capital markets would reduce to exchange of a single instrument in each market period, and there would be no communications industry.[45] Given its different forms, there are various ways of representing uncertainty and modelling economic agents' responses to it.[46]
Game theory is a branch of applied mathematics that considers strategic interactions between agents, one kind of uncertainty. It provides a mathematical foundation of industrial organization, discussed above, to model different types of firm behaviour, for example in an oligopolistic industry (few sellers), but equally applicable to wage negotiations, bargaining, contract design, and any situation where individual agents are few enough to have perceptible effects on each other. As a method heavily used in behavioural economics, it postulates that agents choose strategies to maximize their payoffs, given the strategies of other agents with at least partially conflicting interests.[47][48]
In this, it generalizes maximization approaches developed to analyse market actors such as in the supply and demand model and allows for incomplete information of actors. The field dates from the 1944 classic Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. It has significant applications seemingly outside of economics in such diverse subjects as formulation of nuclear strategies, ethics, political science, and evolutionary biology.[49]
Risk aversion may stimulate activity that in well-functioning markets smooths out risk and communicates information about risk, as in markets for insurance, commodity futures contracts, and financial instruments. Financial economics or simply finance describes the allocation of financial resources. It also analyses the pricing of financial instruments, the financial structure of companies, the efficiency and fragility of financial markets,[50] financial crises, and related government policy or regulation.[51]
Some market organizations may give rise to inefficiencies associated with uncertainty. Based on George Akerlof's "Market for Lemons" article, the paradigm example is of a dodgy second-hand car market. Customers without knowledge of whether a car is a "lemon" depress its price below what a quality second-hand car would be.[52] Information asymmetry arises here, if the seller has more relevant information than the buyer but no incentive to disclose it. Related problems in insurance are adverse selection, such that those at most risk are most likely to insure (say reckless drivers), and moral hazard, such that insurance results in riskier behaviour (say more reckless driving).[53]
Both problems may raise insurance costs and reduce efficiency by driving otherwise willing transactors from the market ("incomplete markets"). Moreover, attempting to reduce one problem, say adverse selection by mandating insurance, may add to another, say moral hazard. Information economics, which studies such problems, has relevance in subjects such as insurance, contract law, mechanism design, monetary economics, and health care.[53] Applied subjects include market and legal remedies to spread or reduce risk, such as warranties, government-mandated partial insurance, restructuring or bankruptcy law, inspection, and regulation for quality and information disclosure.[54][47]
Market failure
(Figure caption: a smokestack releasing smoke. Pollution can be a simple example of market failure: if costs of production are not borne by producers but by the environment, accident victims, or others, then prices are distorted.)
The term "market failure" encompasses several problems which may undermine standard economic assumptions. Although economists categorize market failures differently, the following categories emerge in the main texts.[55]
Information asymmetries and incomplete markets may result in economic inefficiency but also a possibility of improving efficiency through market, legal, and regulatory remedies, as discussed above.
Natural monopoly, or the overlapping concepts of "practical" and "technical" monopoly, is an extreme case of failure of competition as a restraint on producers. Extreme economies of scale are one possible cause.
Public goods are goods which are undersupplied in a typical market. The defining features are that people can consume public goods without having to pay for them and that more than one person can consume the good at the same time.
Externalities occur where there are significant social costs or benefits from production or consumption that are not reflected in market prices. For example, air pollution may generate a negative externality, and education may generate a positive externality (less crime, etc.). Governments often tax and otherwise restrict the sale of goods that have negative externalities and subsidize or otherwise promote the purchase of goods that have positive externalities in an effort to correct the price distortions caused by these externalities.[56] Elementary demand-and-supply theory predicts equilibrium but not the speed of adjustment for changes of equilibrium due to a shift in demand or supply.[57]
In many areas, some form of price stickiness is postulated to account for quantities, rather than prices, adjusting in the short run to changes on the demand side or the supply side. This includes standard analysis of the business cycle in macroeconomics. Analysis often revolves around causes of such price stickiness and their implications for reaching a hypothesized long-run equilibrium. Examples of such price stickiness in particular markets include wage rates in labour markets and posted prices in markets deviating from perfect competition.
(Figure caption: an environmental scientist sampling water from a river.)
Some specialized fields of economics deal in market failure more than others. The economics of the public sector is one example. Much environmental economics concerns externalities or "public bads".
Policy options include regulations that reflect cost-benefit analysis or market solutions that change incentives, such as emission fees or redefinition of property rights.[58]
Public sector
Public finance is the field of economics that deals with budgeting the revenues and expenditures of a public sector entity, usually government. The subject addresses such matters as tax incidence (who really pays a particular tax), cost-benefit analysis of government programmes, effects on economic efficiency and income distribution of different kinds of spending and taxes, and fiscal politics. The latter, an aspect of public choice theory, models public-sector behaviour analogously to microeconomics, involving interactions of self-interested voters, politicians, and bureaucrats.[59]
Much of economics is positive, seeking to describe and predict economic phenomena. Normative economics seeks to identify what economies ought to be like.
Welfare economics is a normative branch of economics that uses microeconomic techniques to simultaneously determine the allocative efficiency within an economy and the income distribution associated with it. It attempts to measure social welfare by examining the economic activities of the individuals that comprise society.[60]
Macroeconomics examines the economy as a whole to explain broad aggregates and their interactions "top down", that is, using a simplified form of general-equilibrium theory.[61] Such aggregates include national income and output, the unemployment rate, and price inflation and subaggregates like total consumption and investment spending and their components. It also studies effects of monetary policy and fiscal policy.
Since at least the 1960s, macroeconomics has been characterized by further integration as to micro-based modelling of sectors, including rationality of players, efficient use of market information, and imperfect competition.[62] This has addressed a long-standing concern about inconsistent developments of the same subject.[63]
Macroeconomic analysis also considers factors affecting the long-term level and growth of national income. Such factors include capital accumulation, technological change and labour force growth.[64]
Growth
Growth economics studies factors that explain economic growth – the increase in output per capita of a country over a long period of time. The same factors are used to explain differences in the level of output per capita between countries, in particular why some countries grow faster than others, and whether countries converge at the same rates of growth.
Much-studied factors include the rate of investment, population growth, and technological change. These are represented in theoretical and empirical forms (as in the neoclassical and endogenous growth models) and in growth accounting.[65]
Business cycle
(Figure caption: a basic illustration of economic/business cycles.)
The economics of a depression was the spur for the creation of "macroeconomics" as a separate field of study. During the Great Depression of the 1930s, John Maynard Keynes authored a book entitled The General Theory of Employment, Interest and Money outlining the key theories of Keynesian economics. Keynes contended that aggregate demand for goods might be insufficient during economic downturns, leading to unnecessarily high unemployment and losses of potential output.
He therefore advocated active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy actions by the government to stabilize output over the business cycle.[66] Thus, a central conclusion of Keynesian economics is that, in some situations, no strong automatic mechanism moves output and employment towards full employment levels. John Hicks' IS/LM model has been the most influential interpretation of The General Theory.
Over the years, understanding of the business cycle has branched into various research programmes, mostly related to or distinct from Keynesianism. The neoclassical synthesis refers to the reconciliation of Keynesian economics with neoclassical economics, stating that Keynesianism is correct in the short run but qualified by neoclassical-like considerations in the intermediate and long run.[67]
New classical macroeconomics, as distinct from the Keynesian view of the business cycle, posits market clearing with imperfect information. It includes Friedman's permanent income hypothesis on consumption and "rational expectations" theory,[68] led by Robert Lucas, and real business cycle theory.[69]
In contrast, the new Keynesian approach retains the rational expectations assumption, however it assumes a variety of market failures. In particular, New Keynesians assume prices and wages are "sticky", which means they do not adjust instantaneously to changes in economic conditions.[70]
Thus, the new classicals assume that prices and wages adjust automatically to attain full employment, whereas the new Keynesians see full employment as being automatically achieved only in the long run, and hence government and central-bank policies are needed because the "long run" may be very long.
Unemployment
(Figure caption: the percentage of the US population employed, 1995–2012.)
The amount of unemployment in an economy is measured by the unemployment rate, the percentage of workers without jobs in the labour force. The labour force only includes workers actively looking for jobs. People who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are excluded from the labour force. Unemployment can be generally broken down into several types that are related to different causes.[71]
In classical models, unemployment occurs when wages are too high for employers to be willing to hire more workers. Wages may be too high because of minimum wage laws or union activity. Consistent with classical unemployment, frictional unemployment occurs when appropriate job vacancies exist for a worker, but the length of time needed to search for and find the job leads to a period of unemployment.[71]
Structural unemployment covers a variety of possible causes of unemployment including a mismatch between workers' skills and the skills required f
submitted by commiemod to Price

hazard ratio interpretation percentage video

12 RR ve OR Oranı
Hazard Ratios - Fares Alahdab MD - YouTube
Hazard Ratios and Survival Curves - YouTube
Kaplan Meier curve and hazard ratio tutorial (Kaplan Meier ...
How to interpret a survival plot - YouTube
Hazard ratio - YouTube
Calculating Hazard Ratios [Survival Analysis] - YouTube
TIPS AND TRICKS OF PERCENTAGE FOR DATA INTERPRETATION ...
Hazard Ratio - YouTube

Age group   Cause-specific HR   P-value   95% CI
18-59       1.00                -         -
60-84       0.96                0.073     0.92 to 1.01
85+         2.11                <0.001    1.93 to 2.32
(Table: cause-specific hazard ratios for breast cancer. Sally R. Hinchliffe, University of Leicester, 2012.)

The hazard ratio (HR) is one of the measures that, in clinical research, are most often difficult to interpret for students and researchers. In this post we will try to explain this measure in terms of its practical use. You should know what the hazard ratio is, but we will repeat it again. Let's take […]

The hazard ratio is not always valid, however. (Figures: Nelson-Aalen cumulative hazard estimates and Kaplan-Meier survival estimates by group, for group 0 versus group 1, with hazard ratio = 0.71.)

The numerical value can be a fraction of 1.0 or it can be greater than 1.0. For example, a hazard ratio of 0.70 means that the study drug provides a 30% risk reduction compared to the control treatment (25). A hazard ratio of exactly 1.0 means that the study drug provides zero risk reduction compared to the control treatment.

The hazard ratio, sometimes called a relative hazard, is typically used to compare time-to-event data between two treatment groups. The hazard ratio of death for the intervention group compared with the control group was 0.46 (0.22 to 0.95). As time progresses, percentage survival decreases in both groups. Plotting curves on the graphs allows statistical analysis to be performed to calculate the hazard (absolute risk over time) for each group. Dividing the hazard in the treatment group by the hazard in the control group produces the hazard ratio.

The hazard ratio is reported most commonly in time-to-event or survival analysis, i.e. when we are interested in how long it takes for a particular event or outcome to occur. It can be obtained from a Cox regression, or Cox proportional hazards regression model.

Two people such as Mike and Sam each face an annual risk of death, whose technical name is their 'hazard'. A hazard ratio of 1.13 therefore means that, for two people who are similar apart from the extra meat, the one with the risk factor – Mike – has a 13% higher annual risk of death over the follow-up period (around 20 years).

For Cox models where you want to express a hazard ratio for some particular percentage change in a continuous predictor, it can be useful to make an appropriate change of base of the logarithm before you perform the regression. For example, $\log_2 2 = 1$, so a doubling of cost represents a 1-unit increase on the $\log_2$ scale.

Hazard ratio calculator: use this calculator to easily obtain the relative hazard, confidence intervals, and p-values for the hazard ratio (HR) between an exposed/treatment group and a control group. One- and two-sided confidence intervals are reported, as well as Z-scores based on the log-rank test.
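Two of the points above lend themselves to a short worked example: reading a hazard ratio as a percentage change in the hazard, and rescaling a continuous predictor so that the Cox coefficient refers to a doubling. A minimal sketch in Python; the Cox coefficient used is hypothetical, and the percentage reading assumes the proportional-hazards model holds:

```python
import math

# Reading a hazard ratio as a percentage change in the hazard.
for hr in (0.46, 0.70, 1.13, 1.28):
    change = (hr - 1) * 100
    direction = "reduction" if change < 0 else "increase"
    print(f"HR {hr:.2f} -> {abs(change):.0f}% {direction} in the hazard vs. the comparison group")

# Rescaling a continuous predictor so the coefficient refers to a doubling:
# if cost enters the model as log2(cost), a doubling of cost raises log2(cost) by exactly 1,
# so exp(coefficient) is the hazard ratio per doubling. The coefficient below is hypothetical.
beta_per_log2_unit = 0.18
hr_per_doubling = math.exp(beta_per_log2_unit)
print(f"HR per doubling of cost: {hr_per_doubling:.2f}")  # about 1.20 here
```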


12 RR ve OR Oranı (RR and OR Ratio)

A brief conceptual introduction to hazard ratios and survival curves (also known as Kaplan Meier plots). Hopefully this gives you the information you need to...
The Kaplan Meier (Kaplan-Meier) curve is frequently used to perform time-to-event analysis in the medical literature. The Kaplan Meier curve, also known as ...
In this video we describe tips and tricks of percentage for data interpretation. Most of the exams including SBI PO, IBPS, CLERK, RAILWAYS, SSC CG...
This short video describes how to interpret a survival plot. Please post any comments or questions below, or at our Statistics for Citizen Scientists group: ...
Kaplan Meier curve and hazard ratio tutorial (Kaplan Meier curve and hazard ratio made simple!) - Duration: 52:54. Eric McCoy, 16,925 views
Ref: Student4bestevidence.net
How to calculate the hazard ratio of two groups' survival times. Thanks for watching! ♫ Eric Skiff - Chibi Ninja http://freemusicarchive.org/music/Eric_Skif...
In survival analysis, the hazard ratio (HR) is the ratio of the hazard rates corresponding to the conditions described by two levels of an explanatory variab...
This is a short presentation on hazard ratio, its uses, interpretation, and a talk about some relevant concepts.
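Several of the clips above walk through estimating the hazard ratio between two groups from survival times. A minimal sketch of that workflow, assuming the third-party lifelines package is installed; the data are synthetic and not taken from any of the videos:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Synthetic survival data: exponential event times, with the treatment group at a lower hazard.
rng = np.random.default_rng(0)
n = 200
group = np.repeat([0, 1], n)                       # 0 = control, 1 = treatment
baseline_hazard = 0.10
true_hr = 0.7
times = rng.exponential(1 / (baseline_hazard * np.where(group == 1, true_hr, 1.0)))
observed = times < 30                              # administrative censoring at t = 30
durations = np.minimum(times, 30)

df = pd.DataFrame({"T": durations, "E": observed.astype(int), "group": group})

# Kaplan-Meier fit per group (kmf.plot_survival_function() would draw the curves).
for g, label in [(0, "control"), (1, "treatment")]:
    kmf = KaplanMeierFitter()
    kmf.fit(df.loc[df.group == g, "T"], df.loc[df.group == g, "E"], label=label)
    print(label, "median survival time:", kmf.median_survival_time_)

# Cox proportional hazards model: exp(coefficient) on `group` is the hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
print(cph.summary[["coef", "exp(coef)", "p"]])     # exp(coef) should land near the true HR of 0.7
```

cph.summary also reports confidence intervals for exp(coef), which is how the hazard ratio and its 95% CI are usually quoted alongside a Kaplan-Meier plot.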
