
Research Design - Methodology and Methods

 

3.1 Research Design Overview

The research aim was twofold: firstly, to establish the extent of all participants' 'dyslexia-ness', this being the independent variable; secondly, to gauge their academic confidence in relation to their studies at university, the dependent variable. This section describes the strategic and practical processes of the project as planned and actioned. Details are provided about how the practical processes were designed and developed to enable appropriate data sources to be identified; how the research participants were identified and contacted; and how data were collected, collated and analysed so that the research questions could be properly addressed. The rationales for research design decisions are set out and justified, and where the direction of the project has diverted from the initial aims and objectives, the reasons for these changes are explained. This includes the reflective processes that have underpinned project decision-making and re-evaluation of the focus of the enquiry where this has occurred.

Design Focus

The project has taken an explorative, mixed-methods design focus. This is because, firstly, little is known about the interrelationships between academic confidence and dyslexia, hence no earlier model has been available to provide guidance. Secondly, although the data collected were mainly quantitative - Likert-scale item responses were transformed into numerical data for analysis - some qualitative data were also collected through a free-writing area included in the research questionnaire. In this way, hypotheses generated from the research questions were addressed objectively using the outputs from the statistical analysis of the quantitative data, with free-writing comments from participants intended to add depth to discussion points later. This is reported fully in Section 4, Results and Analysis.

 

3.2      Research participants: Groups and Subgroups

The participants were all students at university, and no selective or stratified sampling protocols were used in relation to gender, academic study level or study status - that is, whether an individual was a home or overseas student. However, all three of these parameters were recorded for each respondent, and these data have been used throughout the analysis and discussion where considered apposite. It is possible that a later study may revisit the data to explore differences that may emerge through stratified analysis.

The objective was to establish a sizeable research datapool through convenience sampling, comprising two groups: the first was to be as good a cross-section of HE students as could be returned through voluntary participation in the project. Participants in this group were recruited through advertisements posted on Middlesex University’s student-facing webpages during the academic year 2015-16. The second group was to be students known to have dyslexic learning differences, recruited through the University’s Disability and Dyslexia Service student e-mail distribution list. Recruitment was incentivized by offering participants an opportunity to enter a prize draw on completing the questionnaire, with Amazon vouchers offered as prizes. From the group of non-dyslexic students, it was hoped that a subgroup of students presenting quasi-dyslexia could be identified. It was of no consequence that students with dyslexia may have found their way to the questionnaire through the links from the intranet rather than in response to the Disability and Dyslexia Service's e-mail, because the questionnaire asked participants to declare any dyslexic learning challenges. Hence, participants could be assigned to the appropriate research group from either recruitment route.

 

Thus, three distinct datasets were established:

  • Students with known dyslexia - designated Research Group DI;

  • Students with no known dyslexia - designated Research Group ND;

  • Students with quasi-dyslexia - designated Research Group DNI, being a subset of Research Group ND.

 

Hence, it was possible to compare levels of academic confidence between the three groups.

 

3.3  Data Collection

I - Objectives

As this project is focused on finding out more about the academic confidence of university students and relating this to levels of dyslexia-ness, the data collection objectives were:

  • to design and build a data collection instrument that could gather information about academic confidence and aspects of dyslexia-ness, expediently and unobtrusively from a range of university students, in information formats that could easily be collated and statistically analysed once acquired;

  • to ensure that the data collection instrument was as clear, accessible and easy-to-use as possible, noting that many respondents would be dyslexic;

  • to ensure that the data collection instrument could acquire information quickly (15 minutes was considered as the target) to maintain research participant interest and attention;

  • to design an instrument that could be administered online for participants to engage with at their convenience;

  • to enable participants to feel part of a research project rather than its subjects, and hence engage with it and provide honest responses;

  • to maximize response rates and minimize selection bias for the target audience;

  • to ensure compliance with all ethical and other research protocols and conventions for data collection according to guidelines and regulations specified by the researcher's home university.

 

These objectives were met by designing and building a self-report questionnaire. Carefully constructed survey questionnaires are widely used to collect data on individuals' feelings and attitudes that can be easily numericized to enable statistical analysis (Rattray & Jones, 2007), and questionnaires are one of the most commonly used processes for collecting information in educational contexts (Colosi, 2006). This rationale falls within the scope of survey research methodology, in which asking participants questions about the issues being explored is a practical and expedient process of data collection, especially where more controlled experimental processes, such as might be conducted in a laboratory, or other methods of observing behaviour, are not feasible (Loftus et al., 1985). Self-report questionnaires have been found to provide reliable data in dyslexia research (e.g.: Tamboer et al., 2014; Snowling et al., 2012). Developments in web-browser technologies and electronic survey creation techniques have led to the widespread adoption of questionnaires that can be delivered electronically across the internet (Ritter & Sue, 2007), and so this process was used. The ability to reach a complete community of potential participants through the precise placement and marketing of a web-based questionnaire was felt to have significant benefits. These included:

  • the ability for the researcher to remain detached from the data collection process to reduce any researcher-induced bias;

  • the ability for respondents to complete the questionnaire privately, at their own convenience and without interruption, which it was hoped would lead to responses that were honest and accurate;

  • ease of placement and reach, achieved through the deployment of a weblink to the questionnaire on the home university's website;

  • ease of data receipt using the standard design feature in online surveys of a 'submit' button to generate a dataset of the responses in tabular form for each participant, automatically sent by e-mail to the researcher's university mail account;

  • the ability to ensure that participant consent had been obtained by making access to the questionnaire conditional on that agreement;

  • the facility for strict confidentiality protocols to be applied whereby a participant's data, once submitted, were to be anonymous and not attributable to the participant by any means.

 

Every questionnaire response received was anonymised at the submission point with a randomly generated 8-figure Questionnaire Response Identifier (QRI). The QRI was automatically added to the response dataset by the post-action process that submitted the form as an e-mail. Should any participant subsequently have wished to revoke their submitted data, this could be achieved by quoting the QRI in a revocation request form, also submitted electronically and received anonymously. In the event, no participants requested this.
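As an illustration of this anonymisation step, the sketch below shows how an 8-figure QRI might be generated and attached to a submitted response. It is a minimal Python sketch only: the live questionnaire generated the QRI within the web form's post-action at submission, and the function and field names used here (generate_qri, anonymise_response) are hypothetical.

```python
import secrets

def generate_qri() -> str:
    """Return a random 8-figure Questionnaire Response Identifier (QRI).

    Hypothetical helper: the live questionnaire generated the QRI in the
    form's post-action at submission, not in Python.
    """
    return f"{secrets.randbelow(10**8):08d}"

def anonymise_response(responses: dict) -> dict:
    """Attach a QRI to a set of submitted field values so that the record
    can later be revoked by quoting the QRI, without being attributable
    to the participant by any other means."""
    record = dict(responses)        # copy the submitted field values
    record["QRI"] = generate_qri()  # append the anonymous identifier
    return record

# Illustrative use with invented field values
print(anonymise_response({"study_level": "Level 6", "domicile": "home"}))
```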

II - Questionnaire design rationales

The questionnaire was designed to be as clear and as brief as possible. Notably, guidance provided by the British Dyslexia Association was helpful in meeting many of the design objectives, with other literature consulted about designing online and web-based information systems to ensure better access for users with dyslexia. Particular attention was paid to text formats and web design for visually impaired and dyslexic readers, to improve readability and assist with dyslexia compliance (Gregor & Dickinson, 2007; Al-Wabil et al., 2007; Evett & Brown, 2005).

Secondly, a range of later research was consulted to explore how dyslexia-friendly online webpage design may have been reviewed and updated in the light of the substantial expansion over the last two decades of online learning initiatives. These have developed within HE institutions through virtual learning environments (VLEs) and digital learning object platforms such as Xerte (Xerte Community, 2015) and Articulate (Omniplex Group, 2018), and from external sources such as MOOCs and free-course providers such as FutureLearn (Open University, 2018), all of which rely on modern web-browser functionality (Rello et al., 2012; Chen et al., 2016; Berget et al., 2016).

 

Additionally, literature was consulted to understand how the latest HTML5 web technologies and the rapid rise in smart mobile device use were influencing universal web design (Riley-Huff, 2012; 2015; Henry et al., 2014; Fogli et al., 2014; Baker, 2014). It was apparent that online information presentations which enshrined strong accessibility protocols not only enabled better access for those with dyslexia, or who experienced visual stress or other vision differences, but also provided better accessibility and more straightforward functionality for everyone (McCarthy & Swierenga, 2010; Rello et al., 2012; de Santana et al., 2013). Other literature was consulted for guidance about the impact of design and response formats on data quality (Maloshonok & Terentev, 2016), on response and completion rates (Fan & Yan, 2010), on the effectiveness of prize draw incentivizations (Sanchez-Fernandez et al., 2012) and invitation design (Kaplowitz et al., 2011), and about web form design characteristics recommended for effectiveness and accessibility (Baatard, 2012).

 

Part of the questionnaire design stage included a review of existing web survey applications to determine whether any provided sufficiently flexible design customizability to meet the design specifications that had been scoped out. Applications (apps) reviewed were Google Forms (Google, 2016), SurveyMonkey (Survey Monkey, 2016), SurveyLegend (Survey Legend, 2016), Polldaddy (Automattic, 2016), Survey Planet (Survey Planet, 2016), Survey Nuts (Zapier Inc., 2016), Zoho Survey (Zoho Corp., 2016) and Survey Gizmo (Widgix, 2016). The limitations of these proprietary survey design apps were found to be numerous and broadly similar: a limited number of respondents per survey; strictly constrained design and functionality options; and advertising or enforced branding, removable only by subscribing to payment plans. None of the apps reviewed included the functionality of range input sliders.

 

Hence the project questionnaire was designed according to the following design rationales:

  • it was an online questionnaire that rendered properly in at least the four most popular web-browsers: Google Chrome, Mozilla Firefox, Internet Explorer and Safari (usage popularity respectively 69.9%, 17.8%, 6.1%, 3.6%; data for March 2016 (w3schools.com, 2016)). Advice was provided in the questionnaire preamble that these were the best web-browsers for viewing and interacting with the questionnaire, and links were provided for downloading the latest versions of the two most popular browsers;

  • text, fonts and colours were carefully chosen to ensure that the questionnaire was attractive to view and easy to engage with, meeting W3C Web Accessibility Initiative Guidelines (W3C WAI, 2016);

  • an estimate was provided about completion time (15 minutes);

  • questions were grouped into five short sections, each focusing on a specific aspect of the research, with each question-group viewable one section at a time. This was an attempt to reduce survey fatigue and poor completion rates (McPeake et al., 2014; Ganassali, 2008; Flowerdew & Martin, 2008; Marcus et al., 2007; Cohen & Manion, 1994). In the event, only 17 of the 183 questionnaires returned were incomplete (9.2%);

  • the substantial part of the questionnaire used Likert-style items in groups, presenting response options using range sliders to gauge agreement with the statements;

  • the questionnaire scale item statements were written as neutrally as possible or, where this was difficult to phrase, a blend of negative and positive phrasing was used (e.g.: Sudman & Bradburn, 1982). This was an attempt to avoid tacitly suggesting that the questionnaire was evaluating the impacts of learning difficulty, disability or other learning challenge on studying at university, but rather that a balanced approach was being used to explore a range of study strengths as well as challenges. Account was taken of evidence that wording polarity may influence respondents' answers to individual questions, with 'no' being a more likely response to negatively worded questions than 'yes' is to positively worded ones (Kamoen et al., 2013), but also of evidence that the widely claimed suppositions - that wording survey items negatively encourages respondents to be more attendant to them, or that mixing item polarity may confuse respondents - are dubious at best when examined through internal reliability analysis (Barnette, 2000). Hence applying scale item statement neutrality where possible was considered the safest approach for minimizing bias;

  • a free-writing field was included to encourage participants to feel engaged with the research by providing an opportunity to make further comments about their studies at university in whatever form they wished. This had proved to be a popular feature in the preceding dissertation questionnaire (Dykes, 2008), providing rich, qualitative data;

  • on submission, an acknowledgement (receipt) page was triggered, from which a copy of the responses submitted could be inspected and revocation of the data could be requested if desired.

III - Questionnaire components

After a short preamble about the project, instructions for completing and submitting the questionnaire were provided. The questionnaire comprised three main sections: the first presented demographic data fields that all participants were to complete; the second comprised quantitative data collection fields to explore academic confidence and dyslexia-ness; the final section collected qualitative data. Closing statements at the end of the questionnaire included the 'submit' button.

Demographic data

 

Data were collected on gender, student domicile ('home' or 'overseas') and student study level, ranging from Foundation (Level 3/4, UK Quality Code for Higher Education (QAA, 2014)) to post-doctoral researcher (Level 8, ibid). This section also asked students with dyslexia how they learned of their dyslexia, thus collecting data to address Hypothesis 3. For this, dyslexic students chose options from two drop-down menus to complete a sentence (Fig. 8).

Figure 8: Selecting how dyslexic students learned of their dyslexia

Quantitative data

 

This section comprised, firstly, the ABC Scale verbatim (Sander & Sanders, 2006, 2009; see Appendix 8.2(II)). This was followed by a set of 60 statements designed to gauge dyslexia-ness. At the time of the design and development of the data collection process, this section comprised a set of six subscales of six statements each, aiming to explore the psycho-social constructs of Learning Related Emotions; Anxiety Regulation and Motivation; Self-Efficacy; Self-Esteem; Learned Helplessness; and Academic Procrastination. In combination, these were intended to form the discriminator for gauging levels of dyslexia-ness, derived from visual discrimination between the radar chart profiles generated from the six subscales (Fig 9). This was drawn from the literature review, rationales and processes of the earlier dissertation (Dykes, 2008). However, in the light of an early simulation exercise (see Appendix 8.1 and Fig 9), it was considered unlikely that sufficient discriminative power would emerge, and although a process for numericizing the visualizations was explored, it was dismissed as too complex and unwieldy.


Figure 9: The radar chart visualization of the 6 psycho-sociometric constructs generated from a trial pseudo-respondent

However, the idea of profile visualizations was retained and subsequently used as an illustrative feature of the factor analysis outputs later. Although retained in the questionnaire, data collected through the six psycho-social subscales have not been incorporated into the project's data analysis process, due to this unease about the reliability of the profiles. Analysis of the data from these six subscales, presently reserved, may form part of a subsequent study. Hence an additional metric had to be developed and incorporated into the questionnaire which more directly assessed dyslexia-ness through the lens of study-skills and academic learning-management attributes. Thus, the 24-item Dyslexia Index (Dx) Profiler was developed and added to this section of the questionnaire. Inspection of the collated data indicated that the research hypotheses could be adequately addressed through data generated from the Dx Profiler and the ABC Scale.

Likert Scales

Likert-style scales were used to collect quantitative data throughout the questionnaire. Participants reported their degree of agreement with each scale-item statement using a continuous response scale approach, developed for this project in preference to traditional, fixed anchor-point scale items. When conventional fixed anchor points are used - commonly 5 or 7 points - the data produced are numerically coded so that they can be statistically analysed. There is some debate about whether data coded in this way justify parametric analysis, because the coding process assigns arbitrary numerical values to non-numerical responses, hence generating a discrete variable. This coding is usually an essential first stage of the data analysis process, but one which then renders the data neither authentic nor actual (Carifio & Perla, 2007; Carifio & Perla, 2008; Ladd, 2009). The issue of conducting parametric analysis on data generated from Likert-style scales remains controversial, aggravated by a tendency amongst researchers not to demonstrate clearly their understanding of the differences between Likert-style scales and Likert-style scale items (Brown, 2011), and compounded by not properly clarifying whether their scales are gauging nominal, ordinal, or interval (i.e. continuous) variables. Hence, by using range sliders, the data collected would be as close to continuous as possible, enabling parametric analysis to be reasonably conducted (Jamieson, 2004; Pell, 2005; Carifio & Perla, 2007; 2008; Grace-Martin, 2008; Ladd, 2009; Norman, 2010; Murray, 2013; Mircioiu & Atkinson, 2017). In this questionnaire, the continuous scales were set as percentage agreement with each statement, ranging from 0% to 100%.
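To make the discrete-versus-continuous distinction concrete, the fragment below contrasts the arbitrary integer coding applied to a conventional 5-point item with the percentage value returned directly by a range slider; the values and variable names are illustrative only.

```python
# Conventional 5-anchor-point item: the response label is coded to an
# arbitrary integer, producing a discrete (ordinal) variable.
anchor_codes = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
                "agree": 4, "strongly agree": 5}
coded_response = anchor_codes["agree"]      # -> 4

# Range-slider item: percentage agreement is recorded directly,
# giving a quasi-continuous value suitable for parametric analysis.
slider_response = 72.5                      # -> 72.5 % agreement
```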

Qualitative Data

The final part of the questionnaire collected qualitative data in an optional, unlimited free-writing area. Participants were invited to comment on any aspects of the research, the questionnaire, or their learning experiences at university more generally.

 

IV - Metrics

Gauging academic confidence

Academic confidence was assessed using the existing ABC Scale, firstly because a range of research has found it to be a good evaluator of the academic confidence presented in university-level students' study behaviours (see Section 2.2), and secondly because no other metrics have been found that explicitly focus on gauging confidence in academic settings (Boyle et al., 2015). There are evaluators that measure self-efficacy or academic self-efficacy, which, as also described in Section 2, is considered to be the umbrella construct that includes academic confidence. Of all such measures, the ABC Scale most closely matched the research objectives of this study. The only modification applied was the replacement of the 5-anchor-point Likert responders in the published ABC Scale with continuous range sliders.

Gauging dyslexia-ness

By quantifying dyslexia-ness, the Dx Profiler aimed to find apparently non-dyslexic students who present dyslexia-like study profiles, that is, quasi-dyslexia, so that these students' academic confidence could be compared with that of students who have disclosed identified dyslexia. Designing a mechanism to identify the subgroup of quasi-dyslexic students has been one of the most challenging aspects of the project. It is important to emphasize that the purpose of the Dx Profiler, as developed for this study, is not to identify dyslexia in students explicitly, although it is possible that some of the quasi-dyslexic students revealed by the profiler may have unidentified dyslexia. The main focus of this study is finding out whether levels of academic confidence are influenced differently by dyslexia or quasi-dyslexia; it may be possible to link differences that emerge to whether a dyslexic student has been identified as such or not, and possibly to show that the dyslexic label itself has a significant impact.

It was considered important to develop an independent means for quantifying dyslexia-ness, in preference to incorporating existing dyslexia 'diagnosis' tools, for two reasons. Firstly, an existing metric for identifying dyslexia in adults would have been difficult to use without explicitly disclosing to participants that part of the project's questionnaire was a ‘test’ for dyslexia. It was felt that to do this may introduce bias, whereby respondents who were not (identified as) dyslexic might try to spot features of the questionnaire which they thought were testing for dyslexia and answer them untruthfully, not least through fear of being identified as dyslexic and the implications that this may bring. Such fear is widely reported, in particular amongst health professionals (e.g.: Shaw & Anderson, 2018; Evans, 2014; Ridley, 2011; Morris & Turnbull, 2007; Illingworth, 2005). To use such a metric covertly would be unethical. Secondly, to meet the research aims of this project it has been important to use a scale which encompasses a broader range of study attributes and behaviours than those specifically, and apparently, impacted by literacy challenges. This is not least because many students with dyslexia at university, partly by virtue of their academic capabilities, may have developed strategies to compensate for literacy-based difficulties experienced in earlier learning histories (see Section 2); but also because, in HE contexts, it has been shown that other aspects of the dyslexic self can impact significantly on academic study, and that it may be a mistake to consider dyslexia to be only a literacy issue, or to focus on cognitive aspects such as working memory and processing speeds (Cameron, 2015). Processes which enable effective self-managed learning strategies to be developed need to be considered (Mortimore & Crozier, 2006), especially as self-regulated learning processes are recognized as a significant feature of university learning experiences (Zimmerman & Schunk, 2011; Broadbent & Poon, 2015).

Development of the Dx Profiler has been a complex process that built on two small pilot studies in addition to pertinent theory about the nature of dyslexia. A detailed report on its development is provided in the appendices (Appendix 8.1). The decision to report on the Dx Profiler in this way was taken because this thesis is about how academic confidence is affected by dyslexia-ness, not about the development of the mechanism to evaluate dyslexia-ness. A subsequent project to write up this process more fully, perhaps for publication, is being considered. In summary, the Dx Profiler has been developed to meet the following criteria:

  • it is a self-report tool requiring no administrative supervision;

  • the self-report item statements are as applicable to non-dyslexic as to dyslexic students;

  • it includes a balance of literacy-related and wider, academic learning-management evaluators;

  • it includes elements of learning biography;

  • although Likert-style based, scale item statements avoid fixed anchor points by presenting respondent selectors as a continuous range option;

  • scale item statements aim to minimize response distortions potentially induced by negative affectivity bias (Brief, et al., 1988);

  • scale item statements aim to minimize respondent auto-acquiescence, that is, 'yea-saying', which is the often-problematic tendency to respond positively to attitude statements (Paulhus, 1991). Thus, the response indicator design has required a fine gradation of level-judgment to be applied;

  • although not specifically designed into the suite of scale-item statements at the outset - which are presented in a random order - natural groupings of statements as sub-scales are expected to emerge through factor analysis later;

  • scale item statements attempt to avoid social desirability bias, that is, the tendency of respondents to self-report positively, either deliberately or unconsciously. In particular, an overall neutrality should be established for the complete Dx Profiler so that it would be difficult for participants to guess which responses would present them in a favourable light (Furnham & Henderson, 1982).

In addition to being designed to meet the data collection design of this study, these parameters emerged from a review of dyslexia self-identifying evaluators, in particular: the BDA's Adult Checklist developed by Smythe and Everatt (2001); the original Adult Dyslexia Checklist proposed by Vinegrad (1994), upon which many subsequent checklists appear to be based; and the later York Adult Assessment (YAA) (Warmington et al., 2012), which has a specific focus as a screening tool for dyslexia in adults. Despite the limitations outlined earlier (sub-section 2.1(VII)), the YAA was found to be usefully informative. Also consulted and adapted were the 'Myself as a Learner Scale' (Burden, 2000); the useful comparison of referral items used in screening tests which formed part of a wider research review of dyslexia by Rice and Brooks (2004); and especially more recent work by Tamboer and Vorst (2015), where both their own self-report inventory of dyslexia for students at university and their useful overview of previous studies were consulted.

 

The final iteration of the Dx Profiler comprised 20 Likert-style item statements, each aiming to capture data relating to a specific study attribute or behaviour, or an aspect of learning biography. The statements were ordered randomly to reduce the likelihood of order-effect bias, where the sequence of questions or statements in a survey may induce a question-priming effect: a response provided for one statement or question subsequently influences the response to the following question when the two appear to be gauging the same or a similar aspect of the construct under scrutiny (McFarland, 1981). Participants recorded their strength of agreement with each statement along a continuous range from 0% to 100%. For each participant, an aggregate was calculated and scaled to provide an output, the Dyslexia Index (Dx), in the range 0 < Dx < 1000.
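The aggregation from the 20 item responses to a Dyslexia Index can be sketched as below. This is an illustration under stated assumptions: the item weights were fixed at the profiler's design stage (Appendix 8.1) and are not reproduced here, so equal weights are used as a placeholder, and the factor of 10 simply maps a 0-100 weighted mean onto the reported 0-1000 range.

```python
import numpy as np

def dyslexia_index(item_scores, item_weights=None):
    """Scale a participant's 20 Dx Profiler responses (each 0-100 %
    agreement) into a Dyslexia Index (Dx) on the 0-1000 range.

    Sketch only: the actual item weights used in the study were set at
    the design stage of the profiler and are not reproduced here.
    """
    scores = np.asarray(item_scores, dtype=float)             # 20 values, 0-100
    if item_weights is None:
        item_weights = np.ones_like(scores)                   # placeholder weights
    weighted_mean = np.average(scores, weights=item_weights)  # still on 0-100
    return weighted_mean * 10                                 # rescale to 0-1000

# Illustrative call with invented responses
rng = np.random.default_rng(0)
print(round(dyslexia_index(rng.uniform(0, 100, size=20)), 1))
```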

Once results were available, principal component analysis (PCA) was applied post hoc to reduce the dimensionality of the Dx Profiler data and generate Dx Factors, although this analysis remains tentative and, to an extent, speculative because the sample from which it was generated (n=98) is quite small. A later study would aim to develop the Dx Profiler by collecting data from larger and more varied samples, hence enabling PCA to be more confidently applied (see Section 4.##). Through this extensive development process, the Dx Profiler was considered to have met its design specifications and was used confidently to gauge the dyslexia-ness of the participants in the study.

Questionnaire pilot

The questionnaire was trialled amongst a small student peer-group (n=10) to gain feedback about its style of presentation, ease of use, the clarity of the questions and statements, the quality of the introduction, the length of time it took to complete, and any issues arising in the way it displayed in the web-browser used, and to elicit any other comments that might indicate that a review or partial review would be necessary before deployment to the target audience. The outcome of this pilot indicated that, other than some minor wording changes, no amendments were required.

 
 

3.4 Data processing summary

Questionnaire responses were received by e-mail and identified from the data as having been submitted either by a student with declared dyslexia or by a student with no declared dyslexia. Raw data were then transferred into Excel for initial inspection. A Dyslexia Index was calculated for each respondent using the weighted mean average process applied to the 20 scale-items, developed at the design stage of the Dx Profiler in the light of the analysis of the pilot study (see Appendix 8.1(I)). Students from the non-dyslexic group whose Dyslexia Index exceeded Dx = 592.5 were categorized as quasi-dyslexic (see sub-section 4.## for the discriminating rationale). Each respondent’s ABC score was calculated as a non-weighted mean average of the 24 scale-item responses, each of which offered a range from 0 to 100, giving an output in the range 0 < ABC < 100 (Fig 9).
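A compact sketch of this processing step is given below. The Dx cutoff of 592.5 and the group labels are taken from the text above; the function names and example values are hypothetical, and the live analysis was carried out in Excel and SPSS rather than Python.

```python
import numpy as np

def abc_score(abc_items):
    """Overall ABC value: non-weighted mean of the 24 ABC Scale
    responses, each recorded on a 0-100 slider."""
    return float(np.mean(abc_items))

def assign_group(declared_dyslexia: bool, dx: float, cutoff: float = 592.5) -> str:
    """Assign a respondent to a research group from their dyslexia
    declaration and Dyslexia Index (Dx). DNI (quasi-dyslexic) is the
    subset of the non-dyslexic group ND whose Dx exceeds the cutoff."""
    if declared_dyslexia:
        return "DI"
    return "DNI" if dx > cutoff else "ND"

# Illustrative respondent: no declared dyslexia but a high Dyslexia Index
print(assign_group(declared_dyslexia=False, dx=640.0))                    # -> 'DNI'
print(round(abc_score(np.random.default_rng(1).uniform(0, 100, 24)), 1))  # example ABC
```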

The complete datapool was transferred into SPSS v24 (IBM Corp, 2016) for further analysis.


Figure 9:   Data processing flowchart.

 

VI - Statistical tools and processes

Use of the t-test in preference to ANOVA

 

Through the adoption and adaptation of the ABC Scale and the careful design and development of the Dyslexia Index Profiler, both of these metrics are considered continuous variables, being respectively the dependent and independent measures in this study. Although the datapool has been sifted into the three research subgroups described above, dyslexia-ness remained a continuous variable across the complete datapool and thus individual students' data response pairings across the two variables were preserved. This would enable a regression analysis to be considered later to determine whether there exists any predictive association between dyslexia-ness and academic confidence.

 

The focus of the data analysis in this enquiry has been to determine whether significant differences exist in the mean values of the dependent variable across the research subgroups. It is recognized that the application of ANOVA to these data may have been appropriate, although that process is usually recommended when the independent variable is categorical in nature (Lund & Lund, 2016). In this current study, had dyslexia-ness been categorized into 'high', 'moderate' or 'low', or indeed sub-gradations of these, ANOVA may have been an appropriate statistical test to use (Moore & McCabe, 1999). However, it was felt that the relatively simpler Student's t-test would be a better choice for determining whether or not significant differences exist in (population) mean values of ABC where the continuously-valued Dyslexia Index is used as the independent variable.

 

In this way, a matrix of t-test outcome-pairs could be constructed which would identify significant differences not only between levels of ABC for the three research subgroups, but also at a factorial level both of ABC and of Dyslexia Index following a principal component analysis of both variables. It is recognized that the t-statistic used in the t-test forms the basis of ANOVA in any case: for a two-group comparison, the F-statistic in ANOVA is exactly equal to t². It is possible that this analysis decision may be reconsidered, perhaps as a recommended project development, by redefining Dyslexia Index as a categorical variable and establishing clear categorical boundaries containing ranges of dyslexia-ness that could be assigned such categories as 'low', 'low-to-moderate' ... etc. In this way, an ANOVA would then be an appropriate statistical analysis to perform.
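The sketch below illustrates such a matrix of pairwise t-tests across the three research groups, and the two-group equivalence between the t-test and ANOVA (F = t²). All group values are invented for illustration; the study's own analysis was run in SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Invented ABC scores for the three research groups (not study data)
abc = {
    "DI":  rng.normal(62, 12, 60),   # identified dyslexia
    "ND":  rng.normal(70, 11, 90),   # no known dyslexia
    "DNI": rng.normal(64, 13, 18),   # quasi-dyslexic subset of ND
}

# Matrix of pairwise t-test outcomes between the research groups
for a, b in [("DI", "ND"), ("DI", "DNI"), ("ND", "DNI")]:
    t, p = stats.ttest_ind(abc[a], abc[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}")

# For any two groups, one-way ANOVA reduces to the t-test: F == t**2
t, _ = stats.ttest_ind(abc["DI"], abc["ND"])
F, _ = stats.f_oneway(abc["DI"], abc["ND"])
print(np.isclose(F, t**2))   # -> True
```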

Effect sizes

 

Effect size challenges the traditional convention that the p-value is the most important data analysis outcome for determining whether an observed effect is real or can be attributed to chance (Maher et al., 2013). The use of effect size as a method for reporting statistically important analysis outcomes is gaining traction in education, social science and psychology research (Rollins et al., 2019), not least in studies about dyslexia, where it is claimed to be a vital statistic for quantifying the outcomes of interventions designed to assist struggling readers (ibid).

 

Effect size values are a measure of either the magnitude of associations or the magnitude of differences, depending on the nature of the datasets being analysed. Effect size is easy to calculate: the simplest result is the absolute difference between the means of two independent groups' datasets. Cohen’s ‘d’ is an improved measure, derived by dividing this difference by the standard deviation of either group (Cohen, 1988), and is commonly used (Thalheimer, 2002).

 

Effect size is useful as a measure of the between-groups difference in means, particularly when measurements have no intrinsic meaning, as is often the case with data generated from Likert-style scales (Sullivan & Feinn, 2012, p.279). Hence, at an early stage of planning the data analysis process, effect size measures were chosen as the main data analysis outcomes, although in preference to Cohen’s ‘d’, the alternative effect size measure of Hedges' ‘g’ was used because it takes better account of the sample sizes of the relative distributions by using a 'pooled' (that is, weighted) standard deviation in the effect size calculation (Cumming, 2010). This is especially appropriate when the sample sizes are notably different, which is the case in this project.
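For reference, a minimal sketch of the Hedges' 'g' calculation described above: the difference between group means divided by the pooled, sample-size-weighted standard deviation. The small-sample correction factor is included; the data values are invented.

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: mean difference divided by the pooled (weighted)
    standard deviation, with the usual small-sample bias correction."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    g = (x.mean() - y.mean()) / pooled_sd
    return g * (1 - 3 / (4 * (nx + ny) - 9))   # bias correction

# Illustrative comparison of two unequally sized groups (values invented)
rng = np.random.default_rng(2)
print(round(hedges_g(rng.normal(70, 11, 90), rng.normal(62, 12, 18)), 2))
```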

Principal Component Analysis

 

Principal Component Analysis (PCA) performs dimensionality reduction on a set of data, especially a scale that is attempting to evaluate a construct. The point of the process is to see whether a multi-item scale can be reduced to a simpler structure with fewer components (Kline, 1994).
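A sketch of the idea, using an invented response matrix in place of the study data (which were analysed in SPSS): each row is a respondent, each column one of the 24 ABC Scale items, and the cumulative explained variance indicates how few components might summarize the scale.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Invented response matrix: respondents x the 24 ABC Scale items (0-100 sliders)
rng = np.random.default_rng(3)
responses = rng.uniform(0, 100, size=(166, 24))

# Standardize the items, then inspect how much variance a small number
# of principal components would retain.
pca = PCA().fit(StandardScaler().fit_transform(responses))
print(np.cumsum(pca.explained_variance_ratio_)[:6].round(2))
```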

 

As a useful precedent, Sander and Sanders (2003) recognized that dimension reduction may be appropriate and conducted a factor analysis of their original, 24-item ABC Scale which generated a 6-factor structure, the components of which were designated as Grades, Studying, Verbalizing, Attendance, Understanding, and Requesting. Their later analysis of the factor structure found that it could be reduced to a 17-item scale with 4 factors, designated as Grades, Verbalizing, Studying and Attendance (Sander & Sanders, 2009). The reduced, 17-item ABC Scale simply discards 7 items from the original 24-item scale, which is otherwise unamended, and so in this project it was considered appropriate to deploy the full, 24-item scale to generate an overall mean ABC value in the analysis, so that an alternative 17-item overall mean ABC value could also be calculated to examine how this might impact on the outcomes.

 

However, much like Cronbach's alpha as a measure of internal consistency, a factor analysis is specific to the dataset to which it is applied. Hence, the factor analysis that Sander and Sanders (ibid) used, and which generated their reduced item scale with four factors, was derived from analysis of the collated datasets they had available from previous work with the ABC Scale, sizeable though this became (n=865). It was considered, therefore, that the factor structure their analysis suggested may not necessarily be applicable more generally without modification or local analysis, despite being widely used by other researchers in one form (ABC24-6) or another (ABC17-4) (e.g.: de la Fuente et al., 2013; de la Fuente et al., 2014; Hilale & Alexander, 2009; Ochoa et al., 2012; Willis, 2010; Keinhuis et al., 2011; Lynch & Webber, 2011; Shaukat & Bashir, 2016). Indeed, when reviewing the ABC Scale, Stankov et al. (in Boyle et al., 2015) implied that more work should be done on consolidating some aspects of the scale, not so much by levelling criticism at its construction or theoretical underpinnings but rather to suggest that, as a relatively new measure (dating from 2003), it would benefit from wider application in the field and subsequent scrutiny of how it is built and what it is attempting to measure.

 

Hence conducting a factor analysis of the data collected in this project using the original 24-item ABC Scale is worthwhile because it may reveal an alternative factor structure that fits the context of this enquiry more appropriately.
