March 9, 2014 at 2:48 am #10833
RJH76 (Member)

Here's a bomb for y'all to pick through over the weekend. Looks like eating breakfast is positively correlated with greater physical activity in young teens. So any extra fat burning from skipping breakfast could be offset by fewer calories burned over the day, at least when greater activity is possible (not stuck in class or at a desk job). http://ajcn.nutrition.org/content/99/2/361.long
Abstract:

Background: The association between breakfast consumption and physical activity (PA) is inconclusive.

Objective: We aimed to investigate daily associations and hourly patterns of PA and breakfast consumption in British adolescents.

Design: Daily PA [accelerometry-derived moderate and vigorous physical activity (MVPA)] and breakfast consumption (diet diary) were measured simultaneously over 4 d in 860 adolescents (boys: 43.4%; mean ± SD age: 14.5 ± 0.5 y). Associations between MVPA and breakfast consumption were assessed by using a multilevel mixed-effects logistic regression separately by sex and for weekends and weekdays. Hourly patterns of MVPA by breakfast consumption status were displayed graphically, and differences were tested by using ANOVA. Multilevel linear regression was used to investigate differences in log MVPA on days when 570 inconsistent breakfast consumers ate or skipped breakfast.

Results: On weekends, boys and girls with higher MVPA were more likely to eat breakfast [OR (95% CI): boys, 1.78 (1.30, 2.45) (P < 0.001); girls, 2.30 (1.66, 3.08) (P < 0.001)] when adjusted for socioeconomic status, percentage of body fat, and total energy intake. Peak hourly MVPA differed for breakfast consumers compared with nonconsumers on weekends (P < 0.001). Inconsistent breakfast consumers did more MVPA on days when they ate breakfast [exponentiated β coefficients (95% CIs): 1.2 (1.0, 1.5) on weekdays and 1.4 (1.1, 1.8) on weekends for boys and 1.6 (1.3, 2.1) on weekends for girls; all P < 0.03].

Conclusions: Eating breakfast was associated with higher MVPA on weekends. The time of peak MVPA differed between breakfast consumers and nonconsumers on weekends. Breakfast consumption at weekends is worth additional investigation to potentially inform PA promotion in adolescents.
I haven't read it line-by-line yet, but after scanning the article it looks like the researchers reported all the relevant details and knew what they were doing. Field studies don't get much better than this. It also seems like a great study for reviewing the common statistical methods used in medical and nutritional research. A lot of the reporting assumes the reader has a research background, which makes it hard to interpret. Here's a start on that, so anyone can interpret the boring "middle section" of research articles:

Mean +/- SD: Whenever you see numbers following a statistical label like this, the numbers are those things. So the average age of the kids was 14.5 years, and the standard deviation within the group was 6 months. You can apply the 68-95-99.7 rule to this (assuming the values are roughly normally distributed): with that little bit of info you know that about 68% of the kids were between 14 and 15 y.o., about 95% were between 13.5 and 15.5 y.o., and about 99.7% were between 13 and 16 y.o. The same reading applies to anything else reported as a mean +/- SD.

P < 0.05: Anything equal to or less than 0.05 is the standard minimum level of significance needed to get a study published. What it literally means is that there's at most a 5% chance of seeing results this extreme if there were really no effect, i.e., if only random chance were at work. When a million dollars is on the line, 90% confidence (p = 0.1) is good enough for many business decisions, meaning the significance threshold is ultimately a judgment call. Drug trials usually demand much stricter thresholds, say 0.001, when life-threatening side effects are at stake. Here again, it's relative to the risk of being wrong. Also, if you see several different P thresholds in a study, like this one, you can interpret it as the researchers showing off. The threshold (alpha) is chosen before doing the test, not after, so the results are either significant or they're not. Since this study pre-set alpha at 0.05, reporting any threshold other than p < 0.05 is meaningless; a p < 0.001 is not "more significant" in this context. It's fun to do, though.

Beta <= 0.2 (80% power): You rarely see this reported, which is too bad. Alpha, the threshold the p-value is compared against, protects you from saying there's a significant difference when there isn't one. Beta is the chance of the opposite mistake: saying something isn't significant when it really is. Power is 1 minus beta, and the standard minimum power is 80%, i.e., a beta of 0.2 or less. If you have an alpha of 0.05 (95% confidence) and a beta of 0.2 or less (80%+ power), you can be reasonably confident in the result either way. If you ever see two studies that contradict each other and only one reports its power, then, all other things being equal, trust the one that reports it.

95% CI: # (#, #): CI stands for "confidence interval". If a paper reports the p-value but not the CI, there's a good chance the researchers either: 1. have minimal training in what they're doing, 2. are lazy, or 3. are hiding something. The same logic used for the +/- SD above helps here. To understand what a CI is telling you, first recognize that any mean from a sample can only be an estimate of the actual mean of the whole population; in this case that would be every child in the UK around 14.5 y.o., maybe every kid on earth. A 95% CI means you can be 95% confident that the true population value is somewhere within that interval. The two things to look for: 1. Everything else being equal, a narrower CI is a more precise estimate than a wider one. 2. If the CI for a difference includes zero, the difference is not significant (for a ratio like an odds ratio, the "no effect" value is 1 rather than 0). So if they're measuring the difference between two options (eating vs. not eating breakfast) and the CI for the effect is 0.82 to 1, it's statistically significant. But if that's the estimated number of pounds lost over 6 months, then so what? In that sense you can judge the practical importance of a difference by how far the whole CI sits from zero. If they don't report a CI, it might be because the change was inconsequential even though it was significant.
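Here's a minimal sketch in Python of how the +/- SD and the 95% CI ideas differ in practice, using only the mean, SD, and sample size quoted in the abstract (14.5 ± 0.5 y, n = 860). The CI printed at the end is my own back-of-envelope illustration of the concept, not a figure reported in the paper.

```python
# Spread of individuals (68-95-99.7 rule) vs. precision of the mean (95% CI).
# Numbers come from the abstract: mean age 14.5 y, SD 0.5 y, n = 860.
import math

mean_age, sd_age, n = 14.5, 0.5, 860

# Where the individual kids fall, assuming a roughly normal age distribution:
for k, pct in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    print(f"~{pct} of kids between {mean_age - k * sd_age} and {mean_age + k * sd_age} y")

# How precisely the *average* age is pinned down (95% CI for the mean):
se = sd_age / math.sqrt(n)              # standard error of the mean
ci = (mean_age - 1.96 * se, mean_age + 1.96 * se)
print(f"95% CI for the mean age: {ci[0]:.2f} to {ci[1]:.2f}")   # ~14.47 to 14.53
```

Note the difference: the SD describes where individual kids fall, while the CI describes how precisely the sample pins down the average. With 860 kids, the average age is known to within a few weeks even though the individuals span a couple of years.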
ANOVA: Stands for analysis of variance. It's used to determine whether there's a difference between at least two groups when you're comparing three or more. Implied whenever you see "ANOVA": there are three or more groups, with (as a rule of thumb) at least 7 or so observations per group. If you've got three or more groups, you run an ANOVA to tell you whether any of them differ; if the ANOVA is insignificant, there's no point in running further pairwise tests between groups.

Logistic regression: Logistic regression shows up constantly in medical research. On one side of the equation you have a binary outcome; in this study it's whether a kid ate breakfast or not. On the other side you have predictors like daily activity, total energy intake (calories), socioeconomic status, and body fat, with separate models run by sex and for weekdays vs. weekends. After running the regressions they report two kinds of numbers:

1. OR (odds ratio): This tells you how much the odds of the binary outcome change with each one-unit change in a predictor. In this case it reflects the change in the odds of being a breakfast consumer per hour of activity, as measured by both heart monitors and movement trackers. So in this study, controlling for the other variables, for each extra hour of weekend activity the odds that a boy ate breakfast go up by a factor of 1.78, i.e., a 78% increase in the odds, with the CI putting that factor somewhere between 1.30x and 2.45x.

2. Exponentiated β coefficients: These come from the second model, a linear regression on log MVPA, so exponentiating the coefficient gives a multiplicative effect on activity rather than on the odds. A value of 1.2 means an inconsistent breakfast eater did roughly 20% more MVPA on the days he ate breakfast.
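To make the odds-ratio arithmetic concrete, here's a small back-of-envelope sketch in Python. The 1.78 (1.30, 2.45) figures are straight from the abstract; the implied coefficient and standard error are my own reconstruction for illustration, not numbers reported by the authors.

```python
# Relating the reported odds ratio for boys on weekends, OR = 1.78
# (95% CI: 1.30, 2.45), back to the underlying logistic-regression coefficient.
import math

or_point, ci_low, ci_high = 1.78, 1.30, 2.45

beta = math.log(or_point)                                   # raw coefficient, ~0.58
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)    # implied standard error, ~0.16

# Undo the transformation: exp(beta +/- 1.96*SE) should land back on the CI.
low = math.exp(beta - 1.96 * se)
high = math.exp(beta + 1.96 * se)
print(f"recovered CI: {low:.2f} to {high:.2f}")   # ~1.30 to 2.44, i.e. the reported interval up to rounding

# The whole interval sits above 1 (the "no effect" value for a ratio),
# so the association is statistically significant.
print(f"{(or_point - 1) * 100:.0f}% higher odds of eating breakfast")  # 78%
```

The same exercise works for the exponentiated β coefficients, except the ratio applies to MVPA itself rather than to the odds; the "no effect" value is still 1.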
Sample size: Sample size is a common objection, but usually not for the right reasons. More goes into selecting and judging sample sizes than can fit here, but some general things to consider:

1. Categorical variables require bigger samples; ratio/interval variables get by with smaller ones. Basically, if the measurements have decimals (e.g., 2.6), then a sample of 30 can be plenty, but national polling of yes/no answers tends to need samples of a couple thousand people.

2. The sample size you need depends on how certain you want to be of detecting a given change (see the sketch after this list). If a sample is big enough, finding "significant" relationships or differences is practically guaranteed, so large samples can be used unethically. The smaller the difference or relationship in the population, the bigger the sample you need to find it.

3. Sample size also scales with the number of variables in the mix. For regressions, a rule of thumb is that every variable added needs about 25 more observations to be safe. This study tests multiple variables against each other in different combinations, using a very broad cross-section of the teen population, so a larger sample is needed, and 860 is plenty.

4. If a smaller sample shows a significant relationship, the relationship tends to be stronger.

5. That said, if the mean differences are large and significance isn't found, the relationship is less certain and a much larger sample would be needed to find it. The same is true of time: you need more time to reach significance with a smaller sample, and the smaller the effect, the longer you need to collect enough data to find it.

6. A small sample of around 40 homogeneous people randomly assigned to two groups that doesn't find significance is better evidence that the relationship isn't solid for that group than the same-sized sample drawn at random from the full population.

7. Basically, don't worship the p-value of 0.05, and don't dismiss a study just because of a small sample. If a small sample shows a statistical relationship, that's a bigger deal than if a large sample does. And if you use the median instead of the mean, you can get to 90%+ confidence with just 5 randomly sampled measurements (the odds that the population median falls between the smallest and largest of five random draws are better than 9 in 10).

8. A bunch of smaller studies showing significance and large effect sizes is usually better evidence than one big study.
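To put some numbers on the sample-size point in item 2 above, here's a rough power calculation in Python. It uses the standard normal-approximation formula for comparing two group means; the effect sizes plugged in are arbitrary examples of mine, not anything from this study.

```python
# Rough sample-size sketch using the normal approximation for a two-group
# comparison of means: n per group ~= 2 * ((z_alpha/2 + z_beta) / d)^2,
# where d is the standardized effect size (difference in means / SD).
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test at the chosen alpha
    z_beta = norm.ppf(power)            # power = 1 - beta
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A "medium" effect (d = 0.5) at the usual 0.05 alpha / 80% power thresholds:
print(round(n_per_group(0.5)))    # ~63 per group
# Halve the effect size and the required sample roughly quadruples:
print(round(n_per_group(0.25)))   # ~251 per group
```

Which is the whole point of items 2 and 5: the sample you need blows up as the effect you're hunting shrinks, and a huge sample will happily flag effects too small to matter.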