The study by Murawski et al. (2018) is a quantitative study designed to assess the effectiveness of a mobile app in treating lifestyle-related chronic diseases by improving physical activity and sleep health. Quantitative research focuses on the measurement and numerical analysis of data collected through research instruments such as questionnaires, with the data manipulated using computational methods (LoBiondo-Wood, Haber, Berry, & Yost, 2013). A quantitative method, in the form of a randomised controlled trial (RCT), was the most appropriate design because the study required empirical data (Bernard & Bernard, 2012) to determine the effectiveness of an m-health intervention programme. The aim of this paper is to critique the study by Murawski et al. (2018) using the CONSORT guidelines.
The authors report having used a two-arm randomised controlled trial comprising a combined physical activity and sleep intervention group and a waitlist control group. In a two-arm RCT, subjects are randomly allocated to one of two groups, the relevant therapies are administered, and the results are compared. Because the study consisted of two groups (an intervention group and a waitlist control group), a two-arm trial best fitted this design (Suresh, 2011). The two-arm design was also essential because it allows the researchers to compare the findings from each group and to make informed inferences (Kang, Ragan, & Park, 2008).
The allocation ratio is 1:1, representing 80 participants in each group. The study does not identify any significant changes made to the methods after the initiation of the trial, apart from the eligibility criteria that are provided, which implies there was only one trial protocol. This limits the completeness of the study and reduces its reliability (Boutron, Moher, Altman, Schulz, & Ravaud, 2008). On the other hand, equal sample sizes in the two groups ensure that the findings are not biased and can be compared on equal terms. Reporting the sample size is also critical, as it enables readers to evaluate the diversity of perspectives represented in the study.
The authors provide eligibility criteria through an exclusion approach. For instance, eligible participants had to be residents of Australia and aged between 18 and 55 years, among other criteria. Only eligible participants were contacted via email and introduced to the survey, while those who did not qualify received a link giving open, unrestricted access to the balanced app. The inclusion of detailed eligibility information is significant, as it indicates to readers that the findings are based on qualified participants and are therefore reliable. The exclusion criteria also reduce bias by the researchers, who might otherwise influence the type of respondents included in the study. Furthermore, the eligibility criteria minimise the possibility of making unsupported statements (Cohen & Crabtree, 2008).
The researchers report that, because of the distance-delivered nature of the intervention and the use of self-reported assessments, participants were not required to physically attend the research centre. Describing the setting in which data were gathered is important, as it reveals factors that might have influenced the responses. The intervention for each group is described in sufficient detail, including the timing and method of implementation. For instance, the authors report that the intervention comprised app and non-app elements. The app elements included goal-setting, educational resources, and self-monitoring, whereas the non-app elements included all components administered via email, text message, or the handbook. Assessments were conducted at three and six months. Baxter and Jack (2008) assert that providing comprehensive details of an intervention process enables replication, which further increases the credibility of the findings.
The study provides comprehensive definitions of the pre-defined outcome measures, in addition to how they will be evaluated. The researchers divide the outcomes into process outcomes, primary outcomes, secondary outcomes, and mediators and moderators. Moreover, the authors give detailed descriptions of the measurement instruments to be used in assessing each sub-theme of the major themes. For instance, internet self-efficacy, a process outcome, will be measured using the Internet Self-Efficacy Scale to record respondents' comprehension of the app software and their trust in the app's collection and use of data. Participants' responses will be rated on a Likert scale. Providing comprehensive details on the measurement instruments for each sub-theme not only allows replication but also increases the rigour of the findings and interpretations (Noble & Smith, 2015).
The authors indicate that changes to the initial outcomes of the trial included participants adjusting their initially set goals in light of their assessed achievements. The authors justify these changes by asserting that such adjustments ensure the objectives remain in line with the latest developments and promote individual self-efficacy. Diligent reporting of any differences in the outcomes after the commencement of the trial increases the transparency of reporting and reduces possible bias, and as a result the validity and reliability of the study are strengthened (Noble & Smith, 2015).
The researchers also report on the procedure used to determine the sample size. An alpha of 0.025 was adopted because two primary outcomes, MVPA and sleep quality, were being measured. Assuming moderate effect sizes of d=0.45 for physical activity and d=0.65 for sleep quality, 60 participants per group were required for physical activity and 35 per group for sleep quality. Meta-analyses of physical activity and sleep interventions have reported average drop-out rates of about 20%; in general, however, the larger the sample size, the better. Comprehensive reporting of the sample size determination procedure is important, as it determines the number of subjects to be included in the analysis and thus the quantity of data to be collected. As a result, readers can have a high level of confidence in the outcomes (Ho et al., 2015).
On the other hand, most drop-out reports for web-based trials indicate higher drop-out rates (Davies, Spence, Vandelanotte, Caperchione, & Mummery, 2012). Given the lack of information on attrition specific to m-health interventions, the authors inflated the sample size to account for a 25% drop-out, settling on 80 respondents for each of the two groups. The authors argue that a sample size of 80 will be sufficiently powered to identify small mediated effects (β=0.14) (Fritz & MacKinnon, 2007). This detailed explanation of sample size determination allows readers to evaluate the diversity of the opinions incorporated in the study.
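The drop-out inflation described above is simple arithmetic: divide the required number of completers per group by the expected retention proportion. The following is a minimal sketch, assuming the figures reported in the study (60 and 35 completers per group, 25% attrition); the helper name is our own, not the authors':

```python
import math

def inflate_for_dropout(n_required: int, dropout_rate: float) -> int:
    """Inflate a required per-group sample size to allow for attrition."""
    return math.ceil(n_required / (1 - dropout_rate))

# Physical activity needs the larger group, so it drives the recruitment target.
n_pa = inflate_for_dropout(60, 0.25)     # 60 / 0.75 = 80 per group
n_sleep = inflate_for_dropout(35, 0.25)  # 35 / 0.75 -> 47 per group
print(n_pa, n_sleep)
```

Recruiting 80 per group therefore leaves both primary outcomes adequately powered even if a quarter of the participants drop out.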
The study used randomisation to allocate participants to the intervention and waitlist control groups after the conclusion of the baseline evaluation. Permuted block randomisation with block sizes of eight and four was used, following the procedures provided by Moher et al. (2012), and the same approach was used to prepare the block restrictions and block sizes; beyond this, however, the authors do not elaborate on the sequence-generation technique. The permuted block approach best fitted a study with equal sample sizes in the two groups, as it ensures equality in the allocation of subjects and, consequently, in the analysis of the results. Procedures for allocation concealment have also been adequately explained. Sealed opaque envelopes will be prepared by BM, and an independent researcher tasked with group allocation will open each envelope and notify the project leader and the group members of the assignment. The project leader will then inform the respective participants and send the study materials to the intervention group, while the waitlist group will receive the materials at the end of the six-month evaluation. No one involved was blinded to group assignment, including the trial participants, except that the analyses of primary outcomes were conducted blind to group allocation by a statistician not involved in the research.
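Permuted block randomisation can be sketched as follows. This is an illustrative sketch only, not the authors' code: the seed, group labels, and function name are assumptions for the example, while the 1:1 ratio and mixed block sizes of four and eight follow the study's description.

```python
import random

def permuted_block_sequence(n_participants, block_sizes=(4, 8), seed=2017):
    """Generate a 1:1 allocation sequence from randomly permuted blocks."""
    rng = random.Random(seed)  # assumed seed, for reproducibility only
    sequence = []
    while len(sequence) < n_participants:
        size = rng.choice(block_sizes)
        # Each block holds equal numbers of both arms and is then shuffled,
        # so the allocation stays close to 1:1 throughout recruitment.
        block = ["intervention"] * (size // 2) + ["waitlist"] * (size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

seq = permuted_block_sequence(160)
```

Because every completed block is balanced, the two arms can differ by at most half a block at any point during recruitment, which keeps the final allocation at or very near the 80/80 split for a target of 160.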
The study reports the various measures used to assess the outcomes. An online survey will be used to assess all measures at baseline, three months, and six months. The three-month assessment includes evaluation elements that measure the usability of the system and respondent satisfaction in the intervention group only. Primary and secondary outcomes will also be assessed using different sub-themes, measured with instruments whose effectiveness has been demonstrated in previous studies. Between-group differences in both physical activity and sleep quality will be assessed using Generalised Linear Mixed Models (GLMM), with adjustment for baseline values of the outcomes. The repeated measures on each participant will be accounted for using a random intercept, and Pattern Mixture Modelling will be used for sensitivity analyses.
The authors also include a flow diagram of participants in the Synergy Study, showing the number of respondents in each group who were randomly assigned, received the appropriate intervention, and were analysed for the outcomes. A flow diagram makes it easy for readers to follow the procedures involved in recruitment, analysis, and presentation, which increases the possibility of replication. The study reports that recruitment began in May 2017 and would terminate once the required sample size of 160 was achieved, with a follow-up assessment conducted after six months. The study does not, however, provide baseline demographic and clinical features for each group, apart from tables on the Social Cognitive Theory constructs and an overview of the outcome measures and evaluation points. It is essential that the duration of recruitment and data collection be reported, as this affects the amount of data collected (Schulz, Altman, & Moher, 2010).
The expected outcomes and estimations for the two groups are discussed under primary outcomes, secondary outcomes, process outcomes, and mediators and moderators. The estimated effect size for sleep quality is based on meta-analyses of non-pharmacological sleep interventions that show small to medium effect sizes (Hedges' g=0.35 and Cohen's d=0.41) (Ho et al., 2015). Additionally, studies that use exercise to enhance sleep and report medium to large effect sizes (d=0.74) were also used as a basis for estimating the effect sizes for this study. The authors therefore assumed a moderate effect size for sleep (d=0.65) and for physical activity (d=0.45), with alpha=0.025 (because two primary outcomes, sleep quality and MVPA, were assessed). A limitation of the study is its use of a mobile intervention, a mode commonly associated with high rates of attrition; this affected the sample size, which had to be inflated to cater for a 25% participant drop-out owing to insufficient information on attrition. Furthermore, the study lacked intervention arms receiving either the physical activity or the sleep intervention alone.
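Cohen's d, the standardised mean difference behind the effect sizes quoted above (d=0.45, d=0.65, d=0.74), is the difference in group means divided by the pooled standard deviation. A minimal sketch, using invented sample data rather than anything from the study:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference in means over the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    # Sample variances (n - 1 denominator), then the pooled SD.
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented example: group means differ by half a pooled SD unit.
d = cohens_d([2, 4, 6], [1, 3, 5])  # -> 0.5
```

By the usual conventions, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, which is why the study treats d=0.45 and d=0.65 as moderate.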
The prevalence of chronic diseases in the adult population worldwide calls for appropriate interventions to reduce physical inactivity and poor sleep health. The personalised mobile app intervention proposed by Murawski et al. (2018) is well suited to reducing physical inactivity and improving sleep health because it has been tested by the researchers, with reliability and validity demonstrated through the various standardised instruments used to measure the different constructs. The findings are important to health professionals because this is the first known study to assess the efficacy of an m-health intervention targeting sleep health and physical activity simultaneously. The findings will also add to knowledge on the prevention of chronic diseases through a behaviour-change approach that examines more than one variable using a mobile intervention capable of reaching a wide population. The outcomes of this study will further serve as a basis for future studies on technology-based behaviour-change interventions.
References
Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and implementation for novice researchers. The qualitative report, 13(4), 544-559.
Bernard, H. R. (2012). Social research methods: Qualitative and quantitative approaches. Sage.
Boutron, I., Moher, D., Altman, D. G., Schulz, K. F., & Ravaud, P. (2008). Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation and elaboration. Annals of internal medicine, 148(4), 295-309.
Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in healthcare: controversies and recommendations. The Annals of Family Medicine, 6(4), 331-339.
Davies, C. A., Spence, J. C., Vandelanotte, C., Caperchione, C. M., & Mummery, W. K. (2012). Meta-analysis of internet-delivered interventions to increase physical activity levels. International Journal of Behavioral Nutrition and Physical Activity, 9(1), 1.
Fritz, M. S., & MacKinnon, D. P. (2007). Required sample size to detect the mediated effect. Psychological science, 18(3), 233-239.
Gallicchio, L., & Kalesan, B. (2009). Sleep duration and mortality: a systematic review and meta-analysis. Journal of Sleep Research, 18(2), 148-158.
Ho, F. Y. Y., Chung, K. F., Yeung, W. F., Ng, T. H., Kwan, K. S., Yung, K. P., & Cheng, S. K. (2015). Self-help cognitive-behavioral therapy for insomnia: a meta-analysis of randomized controlled trials. Sleep Medicine Reviews, 19, 17-28.
Kang, M., Ragan, B. G., & Park, J. H. (2008). Issues in outcomes research: an overview of randomization techniques for clinical trials. Journal of athletic training, 43(2), 215-221.
LoBiondo-Wood, G., Haber, J., Berry, C., & Yost, J. (2013). Study Guide for Nursing Research-E-Book: Methods and Critical Appraisal for Evidence-Based Practice. Elsevier Health Sciences.
Moher, D., Hopewell, S., Schulz, K. F., Montori, V., Gøtzsche, P. C., Devereaux, P. J., … & Altman, D. G. (2012). CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. International Journal of Surgery, 10(1), 28-55.
Murawski, B., Plotnikoff, R. C., Rayward, A. T., Vandelanotte, C., Brown, W. J., & Duncan, M. J. (2018). Randomised controlled trial using a theory-based m-health intervention to improve physical activity and sleep health in adults: the Synergy Study protocol. BMJ open, 8(2), e018997.
Noble, H., & Smith, J. (2015). Issues of validity and reliability in qualitative research. Evidence-Based Nursing, 18(2), 34-35.
Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMC medicine, 8(1), 18.
Suresh, K. P. (2011). An overview of randomization techniques: an unbiased assessment of outcome in clinical research. Journal of human reproductive sciences, 4(1), 8.
World Health Organization. (2017). Physical activity: Fact sheet. Geneva: World Health Organization.