Academic confidence and dyslexia at university

Research

 

Section 3 revised March 2020

Research Design - Methodology and Methods

 

3.1 Research Design Overview

The research aim was twofold: firstly, to establish the extent of all participants' 'dyslexia-ness', which was to be the independent variable; secondly, to gauge their academic confidence in relation to their studies at university, the dependent variable. This section describes the strategic and practical processes of the project as planned and actioned. Details are provided about how the practical processes were designed and developed to enable appropriate data sources to be identified; how the research participants were identified and contacted; and how data were collected, collated and analysed so that the research questions could be properly addressed. The rationales for research design decisions are set out and justified, and where the direction of the project diverged from the initial aims and objectives, the reasons for these changes are explained. This includes the reflective processes that underpinned project decision-making and any re-evaluation of the focus of the enquiry.

Design Focus

The project has taken an explorative, mixed-methods design focus. This is because, firstly, little is known about the interrelationships between academic confidence and dyslexia, so no earlier model has been available to provide guidance. Secondly, although the data collected were mainly quantitative - Likert-scale item responses were transformed into numerical data for analysis - some qualitative data were also collected through a free-writing area included in the research questionnaire. In this way, hypotheses generated from the research questions were addressed objectively using the outputs from the statistical analysis of the quantitative data, with free-writing comments from participants intended to add depth to discussion points later. This is reported fully in Section 4, Results and Analysis.

 

3.2 Research participants: Groups and Subgroups

The participants were all students at university, and no selective or stratified sampling protocols were used in relation to gender, academic study level or study status - that is, whether an individual was a home or an overseas student. However, all three of these parameters were recorded for each respondent, and these data have been used throughout the analysis and discussion when considered apposite. It is possible that a later study may re-visit the data to explore differences that may emerge through stratified analysis.

The objective was to establish a sizeable research datapool through convenience sampling that comprised two groups: the first was to be as good a cross-section of HE students as might be returned through voluntary participation in the project. Participants in this group were recruited through advertisements posted on Middlesex University's student-facing webpages during the academic year 2015-16. The second group was to be students known to have dyslexic learning differences, recruited through the University's Disability and Dyslexia Service student e-mail distribution list. Recruitment was incentivized by offering participants an opportunity to enter a prize draw on completing the questionnaire, with Amazon vouchers as prizes. From the group of non-dyslexic students, it was hoped that a subgroup of students presenting quasi-dyslexia could be identified. It was of no consequence that students with dyslexia may have found their way to the questionnaire through the links from the intranet rather than in response to the Disability and Dyslexia Service's e-mail, because the questionnaire requested participants to declare any dyslexic learning challenges. Hence, participants could be assigned to the appropriate research group from either recruitment process.

 

Thus, three distinct datasets were established:

  • Students with known dyslexia - designated Research Group DI;

  • Students with no known dyslexia - designated Research Group ND;

  • Students with quasi-dyslexia - designated Research Group DNI, a subset of Research Group ND.

 

Hence, it was possible to compare levels of academic confidence between the three groups.

 

3.3 Data Collection

I - Objectives

As this project is focused on finding out more about the academic confidence of university students and relating this to levels of dyslexia-ness, the data collection objectives were:

  • to design and build a data collection instrument that could gather information about academic confidence and aspects of dyslexia-ness, expediently and unobtrusively from a range of university students, in information formats that could easily be collated and statistically analysed once acquired;

  • to ensure that the data collection instrument was as clear, accessible and easy-to-use as possible, noting that many respondents would be dyslexic;

  • to ensure that the data collection instrument could acquire information quickly (15 minutes was considered as the target) to maintain research participant interest and attention;

  • to design an instrument that could be administered online for participants to engage with at their convenience;

  • to enable participants to feel part of a research project rather than its subjects, and hence engage with it and provide honest responses;

  • to maximize response rates and minimize selection bias for the target audience;

  • to ensure compliance with all ethical and other research protocols and conventions for data collection according to guidelines and regulations specified by the researcher's home university.

 

These objectives were met by designing and building a self-report questionnaire. Carefully constructed survey questionnaires are widely used to collect data on individuals' feelings and attitudes that can be easily numericized to enable statistical analysis (Rattray & Jones, 2007), and questionnaires are one of the most commonly used processes for collecting information in educational contexts (Colosi, 2006). This rationale falls within the scope of survey research methodology, in which asking participants questions about the issues being explored is a practical and expedient means of data collection, especially where more controlled experimental processes, such as might be conducted in a laboratory, or other methods of observing behaviour are not feasible (Loftus et al., 1985). Self-report questionnaires have been found to provide reliable data in dyslexia research (e.g.: Tamboer et al., 2014; Snowling et al., 2012). Developments in web-browser technologies and electronic survey creation techniques have led to the widespread adoption of questionnaires delivered electronically across the internet (Ritter & Sue, 2007), and so this process was used. The ability to reach a complete community of potential participants through the precise placement and marketing of a web-based questionnaire was felt to have significant benefits. These included:

  • the ability for the researcher to remain removed from the data collection process, reducing any researcher-induced bias;

  • the ability for respondents to complete the questionnaire privately, at their own convenience and without interruption, which it was hoped would lead to responses that were honest and accurate;

  • ease of placement and reach, achieved through the deployment of a weblink to the questionnaire on the home university's website;

  • ease of data receipt using the standard design feature in online surveys of a 'submit' button to generate a dataset of the responses in tabular form for each participant, automatically sent by e-mail to the researcher's university mail account;

  • the ability to ensure that participant consent had been obtained, by making access to the questionnaire conditional on agreement;

  • the facility for strict confidentiality protocols to be applied whereby a participant's data, once submitted, were to be anonymous and not attributable to the participant by any means.

 

Every questionnaire response received was anonymised at the submission point with a randomly generated 8-figure Questionnaire Response Identifier (QRI). The QRI was automatically added to the response dataset by the post-action process that submitted the form as an e-mail. Should any participant subsequently have requested revocation of their submitted data, this would have been achieved by quoting the QRI in the revocation request form, also submitted electronically and received anonymously. In the event, no participants requested this.
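
By way of illustration, a minimal sketch of how such an identifier might be generated (hypothetical: the live questionnaire created the QRI within the web form's own post-action submission process, as described above, not in Python):

```python
import secrets

def generate_qri() -> str:
    # Hypothetical sketch only. An 8-figure, zero-padded identifier gives
    # 10^8 possibilities, making collisions across a datapool of a few
    # hundred responses negligible while keeping submissions anonymous.
    return f"{secrets.randbelow(10**8):08d}"

print(generate_qri())   # e.g. '04719382'
```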

II - Questionnaire design rationales

The questionnaire was designed to be as clear and as brief as possible. Guidance provided by the British Dyslexia Association was helpful in meeting many of the design objectives, and other literature was consulted about designing online and web-based information systems to ensure better access for users with dyslexia. Particular attention was paid to text formats and web design for visually impaired and dyslexic readers, to improve readability for these groups (Gregor & Dickinson, 2007; Al-Wabil et al., 2007; Evett & Brown, 2005).

Secondly, a range of more recent research was consulted to explore how dyslexia-friendly webpage design may have been reviewed and updated in the light of the substantial expansion of online learning initiatives over the last two decades. These have developed within HE institutions through virtual learning environments (VLEs) and digital learning-object platforms such as Xerte (Xerte Community, 2015) and Articulate (Omniplex Group, 2018), and from external sources such as MOOCs and free-course providers such as FutureLearn (Open University, 2018), all of which rely on modern web-browser functionality (Rello et al., 2012; Chen et al., 2016; Berget et al., 2016).

 

Additionally, literature was consulted to understand how the latest HTML5 web technologies and the rapid rise in smart mobile device use were influencing universal web design (Riley-Huff, 2012; 2015; Henry et al., 2014; Fogli et al., 2014; Baker, 2014). It was apparent that online information presentations which enshrined strong accessibility protocols not only enabled better access for those with dyslexia, or who experienced visual stress or other vision differences, but provided better accessibility and more straightforward functionality for everyone (McCarthy & Swierenga, 2010; Rello et al., 2012; de Santana et al., 2013). Other literature was consulted for guidance about the impact of design and response formats on data quality (Maloshonok & Terentev, 2016), on response and completion rates (Fan & Yan, 2010), on the effectiveness of prize draw incentivizations (Sanchez-Fernandez et al., 2012) and invitation design (Kaplowitz et al., 2011), and about web form design characteristics recommended for effectiveness and accessibility (Baatard, 2012).

 

Part of the questionnaire design stage included a review of existing web survey applications to determine whether any provided sufficiently flexible design customizability to meet the design specifications that had been scoped out. Applications (apps) reviewed were Google Forms (Google, 2016), SurveyMonkey (Survey Monkey, 2016), SurveyLegend (Survey Legend, 2016), Polldaddy (Automattic, 2016), Survey Planet (Survey Planet, 2016), Survey Nuts (Zapier Inc., 2016), Zoho Survey (Zoho Corp., 2016) and Survey Gizmo (Widgix, 2016). The limitations of these proprietary survey design apps were found to be numerous and broadly similar: a limited number of respondents per survey; strictly constrained design and functionality options; and enforced advertising or custom branding, removable only by subscribing to payment plans. None of the apps reviewed included range input sliders.

 

Hence the project questionnaire was designed according to these rationales:

  • it was an online questionnaire that rendered properly in at least the four most popular web-browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Safari (usage popularity respectively 69.9%, 17.8%, 6.1%, 3.6%, data for March 2016 (w3schools.com, 2016)). Advice was provided in the questionnaire preamble that these were the best web-browsers for viewing and interacting with the questionnaire; links were provided for downloading the latest versions of the two most popular browsers;

  • text, fonts and colours were carefully chosen to ensure that the questionnaire was attractive to view and easy to engage with, meeting W3C Web Accessibility Initiative Guidelines (W3C WAI, 2016);

  • an estimate was provided about completion time (15 minutes);

  • questions were grouped into five short sections, each focusing on a specific aspect of the research, with each question-group viewable one section at a time. This was to attempt to reduce survey fatigue and poor completion rates (McPeake et al., 2014; Ganassali, 2008; Flowerdew & Martin, 2008; Marcus et al., 2007; Cohen & Manion, 1994). In the event, only 17 of the 183 questionnaires returned were incomplete (9.2%);

  • the substantial part of the questionnaire used Likert-style items in groups, presenting response options using range sliders to gauge agreement with the statements;

  • the questionnaire scale item statements were written as neutrally as possible or, in instances where this was difficult to phrase, a blend of negative and positive phrasing was used (e.g.: Sudman & Bradburn, 1982). This was an attempt to avoid tacitly suggesting that the questionnaire was evaluating the impacts of learning difficulty, disability or other learning challenge on studying at university, but rather that a balanced approach was being used to explore a range of study strengths as well as challenges. Account was taken of evidence that wording polarity may influence respondents' answers, with 'no' being a more likely response to negatively worded questions than 'yes' is to positively worded ones (Kamoen et al., 2013), and of analysis suggesting that the widely held suppositions that negatively worded items encourage respondents to attend more closely to them, or that mixed item polarity confuses respondents, are dubious at best (Barnette, 2000). Hence applying scale-item statement neutrality where possible was considered the safest approach for minimizing bias;

  • a free-writing field was included to encourage participants to feel engaged with the research by providing an opportunity to make further comments about their studies at university in whatever form they wished. This had proved to be a popular feature in the preceding dissertation questionnaire (Dykes, 2008), providing rich, qualitative data;

  • on submission, an acknowledgement (receipt) page was triggered, from which a copy of the responses submitted could be inspected and revocation of the data requested if desired.

III - Questionnaire components

After a short preamble about the project, instructions for completing and submitting the questionnaire were provided. The questionnaire comprised three main sections: the first presented demographic data fields for all participants to complete; the second comprised quantitative data collection fields exploring academic confidence and dyslexia-ness; the final section collected qualitative data. Closing statements at the end of the questionnaire included the 'submit' button.

1. Demographic data

 

Data were collected on gender, student domicile ('home' or 'overseas') and student study level, ranging from Foundation (Level 3/4, UK Quality Code for Higher Education (QAA, 2014)) to post-doctoral researcher (Level 8, ibid). This section also asked students with dyslexia how they learned of their dyslexia, thus collecting data to address Hypothesis 3. For this, dyslexic students chose options from two drop-down menus to complete a sentence (Fig. 8).

Figure 8: Selecting how dyslexic students learned of their dyslexia
2. Quantitative data

 

This section comprised firstly the ABC Scale verbatim (Sander & Sanders, 2006, 2009; see Appendix 8.2(II)). In the final iteration of the questionnaire, this was followed by a set of 60 statements designed to gauge dyslexia-ness, in two parts, although these were not visibly separated in the questionnaire. The first part comprised 36 statements forming six psychometric subscales of six statements each, attempting to gauge respectively: Learning-Related Emotions; Anxiety Regulation and Motivation; Self-Efficacy; Self-Esteem; Learned Helplessness; and Academic Procrastination. The second part comprised the 24-statement Dyslexia Index Profiler, developed later and subsequently used as the principal discriminator to identify the subgroup of quasi-dyslexic students (below).

Likert Scales

Likert-style scales were used to collect quantitative data throughout the questionnaire. Participants reported their degree of agreement with each scale-item statement using a continuous response-scale approach, developed for this project in preference to traditional, fixed-anchor-point scale items. When conventional fixed anchor points are used - commonly 5 or 7 points - the data produced must be numerically coded before they can be statistically analysed. This coding is usually an essential first stage of the data analysis process, but there is some debate about whether data coded in this way justify parametric analysis, because the coding assigns arbitrary numerical values to non-numerical responses, generating a discrete variable that is neither authentic nor actual (Carifio & Perla, 2007; Carifio & Perla, 2008; Ladd, 2009). The issue of conducting parametric analysis on data generated from Likert-style scales remains controversial, aggravated by a tendency amongst researchers not to demonstrate clearly their understanding of the difference between Likert-style scales and Likert-style scale items (Brown, 2011), and compounded by a failure to clarify whether their scales are gauging nominal, ordinal or interval (i.e. continuous) variables (Jamieson, 2004; Pell, 2005; Carifio & Perla, 2007; 2008; Grace-Martin, 2008; Ladd, 2009; Norman, 2010; Murray, 2013; Mircioiu & Atkinson, 2017). By using range sliders, the data collected would be as close to continuous as possible, thus enabling parametric analysis to be reasonably conducted. In this questionnaire, the continuous scales were set as percentage agreement with each statement, ranging from 0% to 100%.
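
To illustrate the distinction, a minimal sketch (with invented response values) contrasting coded fixed-anchor data with the near-continuous slider data used here:

```python
import statistics

# Conventional fixed-anchor Likert item: verbal anchors coded to integers,
# yielding a discrete, strictly ordinal variable.
coded = [1, 3, 3, 4, 5, 2, 4]   # 1 = 'strongly disagree' ... 5 = 'strongly agree'

# Range-slider item: percentage agreement recorded directly (0-100%), so the
# variable is near-continuous and parametric summaries are more defensible.
slider = [12.5, 37.0, 41.5, 68.0, 90.5, 55.0, 73.5]

print(statistics.mean(slider), statistics.stdev(slider))
```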

 

Section 1: The Academic Behavioural Confidence Scale:

 

Academic confidence was assessed using the existing ABC Scale (Sander & Sanders, 2006b). A range of research has found this to be a good evaluator of the academic confidence of university-level students through examination of their study behaviours and actions (see Section 2.2). No other metrics have been found that explicitly focus on gauging confidence in academic settings (Boyle et al., 2015). Evaluators exist to measure self-efficacy or academic self-efficacy which, as also described in Section 2, is considered to be the umbrella construct that includes academic confidence; however, of all such measures, the ABC Scale most closely matched the research objectives of this study. The 5-anchor-point Likert responders in the published ABC Scale were replaced with continuous range sliders so that parametric statistical analysis could be conducted on the data.

Following precedents set by Sander and Sanders (2006a, 2009), principal component analysis (PCA) was applied as a dimensionality reduction technique to explore the factor structure of the ABC Scale for the data collected in this study. This was to determine whether a useful cross-factorial analysis might be conducted with outputs from the Dyslexia Index Profiler (reported in sub-section 4.#), and to comment on the generalizability of the factor structure of the ABC Scale suggested in the Sander and Sanders studies. Of the many studies found which use the ABC Scale as their principal metric, only one (Corkery et al., 2011) applied a study-specific factor analysis to the data collected in lieu of applying the existing factor structure of the Scale.

 

Section 2, Part 1: Six Psychometric Scales:

 

This was a legacy of the earlier MSc dissertation (Dykes, 2008), where similar scales had been used in the data collection questionnaire. In the early stage of the research design process for this current study, it was planned that these six subscales would be combined into a profile chart with sufficient discriminative power to enable quasi-dyslexic students to be identified from the group of non-dyslexic students, thus drawing on the data analysis process adopted in the earlier study. This would be achieved through visual discrimination between the radar-chart profiles generated from the six subscales for each respondent when overlaid onto the profile charts generated from the mean average data for students in the dyslexic group and the non-dyslexic group respectively (Fig 9).

 

The rationale was based on evidence from literature suggesting that discernible differences exist between dyslexic and non-dyslexic individuals for each of these six constructs. For example, levels of self-esteem are depressed in dyslexic individuals in comparison with their non-dyslexic peers (e.g.: Riddick et al., 1999; Humphrey, 2002; Burton, 2004; Alexander-Passe, 2006; Terras et al., 2009; Glazzard, 2010; Nalavany et al., 2013). Furthermore, Humphrey and Mullins (2002) examined several factors that influenced the ways in which dyslexic children perceived themselves as learners, identifying learned helplessness as a significant characteristic; and a study by Klassen et al. (2008) comparing levels of procrastination between students with and without dyslexia found that dyslexic students exhibit significantly higher levels of procrastination in their academic studies at university than students with no indication of dyslexia. Extensive literature exists supporting the impact of dyslexia on the remaining three constructs.

 

To develop the charts in this current project, pseudo-data were generated to simulate mean outputs for typically dyslexic and typically non-dyslexic individuals, based on stereotypical rationales built from personal, practitioner experience of working with students with dyslexia at university, together with evidence from the previous study. A known non-dyslexic individual was then used to generate a sample respondent profile to overlay onto the simulated mean profiles. The aim was to gauge whether a 'by eye' judgement for spotting profile anomalies would have sufficient discriminative power for confidence that the process could identify quasi-dyslexic students from the non-dyslexic group. Although the development of a more data-analysis-based criterion was considered, this seemed unlikely to be easily formulated, and nothing similar was found in existing literature. The resulting, overlapping visualizations were distinct (Fig. 9, generated from observed data collected later from the quasi-dyslexic subgroup), but it was considered doubtful that the complete set of profiles would show sufficiently discernible differences to be used accurately as a discriminating tool. Hence this approach was abandoned in favour of developing an alternative, quantitative process as the discriminator between dyslexic, non-dyslexic and quasi-dyslexic students, which emerged as the Dyslexia Index Profiler. However, the profile chart visualizations were intriguing, suggesting that these data may have value, and so this section of the questionnaire was not deleted. The complete set of profile charts was constructed after data had been collated and has been reserved so that the idea may be explored and reported later, perhaps as part of a subsequent study.


Figure 9: The profile chart

Section 2, Part 2: The Dyslexia Index Profiler

 

Thus, developing the Dyslexia Index (Dx) Profiler became a major component of the research design process: after abandoning the profile-charts idea, it was necessary to establish a fresh, robust mechanism to identify quasi-dyslexic students from the non-dyslexic group without overtly screening for dyslexia. The entire project relied on this, as the main focus was to discover whether levels of academic confidence are influenced differently by dyslexia, quasi-dyslexia and non-dyslexia, with the aim of suggesting that any differences that emerged may be attributable to the dyslexic label, were the data to substantiate this conjecture. This sub-section sets out firstly the background to this development, followed by a description of the methods and processes devised, concluding with the results and outcomes.

I. Background and rationale

Development of the Dx Profiler has been a complex process, grounded in pertinent theory about the broad and multifactorial nature of dyslexia (discussed in Section 2.1). To have used a proprietary dyslexia screener would have raised ethical challenges related to disclosure for participants in the non-dyslexic group (discussed previously, see Section ##), hence compromising the requirement for data collection anonymity. Stated use of a screener may also have introduced bias, where respondents who were not (identified as) dyslexic may have answered some parts of the questionnaire untruthfully through fear of being identified as dyslexic. Such fear is widely reported, in particular amongst health professionals (e.g.: Shaw & Anderson, 2018; Evans, 2014; Ridley, 2011; Morris & Turnbull, 2007; Illingworth, 2005).

 

A metric was required for gauging dyslexia-ness by viewing university study attributes and behaviours through the lens of dyslexia, but which was not designed to be a dyslexia screener. It has been stated previously that many students with dyslexia at university may have developed strategies to compensate for literacy-based difficulties experienced in earlier learning histories, partly by virtue of their academic capabilities (see Section 2). Hence in HE contexts, other aspects of the dyslexic self can impact significantly on academic study and so to consider dyslexia to be only a literacy issue, or to focus on cognitive aspects such as working memory and processing speeds, may be erroneous (Cameron, 2015). Procedures which enable effective self-managed learning strategies to be developed need to be considered (Mortimore & Crozier, 2006), especially as self-regulated learning processes are recognized as a significant feature of university learning experiences (Zimmerman & Schunk, 2011; Broadbent & Poon, 2015).

 

The Dyslexia-ness Continuum

The broad definition of dyslexia outlined by the BDA acknowledges much of this wider discourse about the nature and aetiology of the syndrome, discussed previously (sub-section 2.1). Critically, this definition frames dyslexia as a continuum, which firstly acknowledges that categorical distinctions within the syndrome are problematic, but also suggests that no clear-cut point along this continuum can be universally fixed to indicate the boundary between dyslexic and non-dyslexic individuals. This is despite the desire to do so, not least to enable decisions to be made concerning the award of financial learning-support allowances for students at UK universities.

Adopting the continuum approach, therefore, adds substance to the concept of 'dyslexia-ness', introduced for this current study. Thus, it is reasonable to infer that the characteristics and attributes of dyslexia that are embraced within the definition, and which are the components of dyslexia-ness, might be measured in some way once distilled back into dimensions. This leads to the possibility of exploring either single dimensions, or groups of dimensions (perhaps combined into factors), or the complete portfolio of dimensions - that is, dyslexia-ness. According to their dyslexia-ness 'score', it would then be possible to locate quasi-dyslexic and non-dyslexic individuals at some point along the continuum relative to their more dyslexic peers, or to sift individuals who share similar levels of dyslexia-ness into sub-groups.

 

Hence by considering the dyslexia-ness continuum as a continuous, independent variable, other study attributes, such as academic confidence, can be examined as the corresponding dependent variable. Thus tentative comparisons might be made between groups and sub-groups of, in this case, students at university, naturally leading to a mechanism for deducing more generalized results. Indeed, the idea of a dyslexia-ness continuum might warrant further development, the first part of which should be to devise an alternative descriptor that removes, or at least dilutes, the allusion to the continuum being an evaluation of dyslexia; instead, it is a continuum of learning development characteristics, skills and behaviours that has meaning and relevance in higher education contexts. Whilst this is not to ignore or dismiss the idea of dyslexia per se, such a process might help to relocate it more positively within a multifactorial portfolio of learning and study attributes, which could also reduce much of the stigmatization associated with difference in learning contexts.

Hence to operationalize the dyslexia-ness continuum through the Dyslexia Index Profiler, these design criteria were established:

  • the profiler was to be a self-report tool requiring no administrative supervision;

  • the profiler item statements were to be as applicable to non-dyslexic as to dyslexic students;

  • it would include a balance of literacy-related and wider, academic learning-management and study-behaviour evaluators;

  • it would include elements of learning biography;

  • although Likert-style based, scale item statements were to avoid fixed anchor points by presenting respondent selectors as a continuous range option;

  • scale item statements would aim to minimize response distortions potentially induced by negative affectivity bias (Brief, et al., 1988);

  • scale item statements would aim to minimize respondent auto-acquiescence, that is, 'yea-saying', the often-problematic tendency to respond positively to attitude statements (Paulhus, 1991). Thus, the response indicator design would require a fine gradation of level-judgment to be applied;

  • although not specifically designed into the suite of scale-item statements at the outset - which were presented in a random order - natural groupings of statements as sub-scales were expected to emerge, leading to the possibility for factor analysis to be applied later, if appropriate;

  • scale item statements were to avoid social desirability bias, that is, the tendency of respondents to self-report positively, either deliberately or unconsciously. In particular, an overall neutrality should be established for the complete Dx Profiler so that it would be difficult for participants to guess how to respond to present themselves in a favourable light (Furnham & Henderson, 1982).

II. Designing the Dx Profiler

In addition to being grounded in the most recent BDA definition of dyslexia, several other evaluators were consulted for guidance. In particular: the BDA's Adult Checklist, developed by Smythe and Everatt (2001); the original Adult Dyslexia Checklist proposed by Vinegrad (1994), upon which many subsequent checklists appear to be based; and the later York Adult Assessment (YAA) (Warmington et al., 2012), which has a specific focus as a screening tool for dyslexia in adults, were all explored. Despite the limitations outlined earlier (sub-section 2.1(VII)), the YAA was found to be usefully informative. Also consulted and adapted were the 'Myself as a Learner Scale' (Burden, 2000); the useful comparison of referral items used in screening tests, which formed part of a wider research review of dyslexia by Rice and Brooks (2004); and especially the more recent work of Tamboer and Vorst (2015), whose self-report inventory of dyslexia for students at university and useful overview of previous studies were both consulted.

 

Drawing from all of these sources, and from supporting literature, a portfolio of 20 statements was devised for gauging attributes of study behaviours and learning biography that are known to present characteristic differences between dyslexic and non-dyslexic students, thus setting out the framework for the Dx Profiler (Table #1).

 

Table #1: Dx Profiler statements, dyslexia attributes, and supporting references

* [...] refers to sub-sections in this thesis where this reference is used to support the discussion point

The Profiler was to be closely aligned with the BDA (2018) definition of dyslexia, as adopted for this current study (see Section 2.1(I)). This definition was distilled into three components: language and literacy skills; thinking and processing skills (encompassing issues related to working/short-term memory, but also including creative strengths); and organization and time-management competencies. The statements were then distributed across the three components accordingly, setting out a framework that could be compared later with a post-hoc factor analysis of the observed data acquired from participants in this study. The aim was to determine whether a reasonable factor structure could be devised, possibly as a starting point for validation or amendment of the Profiler, given that this was a newly devised metric (see sub-section 4.#).

COMPONENT: Literacy and language

  • accurate and fluent word reading and spelling;

  • phonological awareness;

  • [other] aspects of language (e.g.: writing coherence);

  • visual processing challenges;

Dimension statements:

'When I was learning to read at school, I often felt I was slower than others in my class'

'My spelling is generally good'

'In my writing, I frequently use the wrong word for my intended meaning'

'When I'm reading, I sometimes read the same line again or miss out a line altogether'

'I have difficulty putting my writing ideas into a sensible order'

'In my writing at school I often mixed up similar letters, like 'b' and 'd' or 'p' and 'q''

'My tutors often tell me that my essays or assignments are confusing to read'

'I get really anxious if I'm asked to read 'out loud''

COMPONENT: Thinking, processing, memory

  • verbal memory;

  • verbal processing speed;

  • mental calculation;

  • concentration;

  • information synthesis;

  • design, problem-solving ingenuity, creativity;

Dimension statements:

'I can explain things to people much more easily verbally than in my writing'

'I get in a muddle when I'm searching for learning resources or information'

'I'm hopeless at remembering things like telephone numbers'

'I find following directions to get to places quite straightforward'

'I prefer looking at the 'big picture' rather than focusing on the details'

'My friends say I often think in unusual or creative ways to solve problems'

'I find it really challenging to follow a list of instructions'

'I get my 'lefts' and 'rights' easily mixed up'

COMPONENT: Organization and time management

  • personal organization;

Dimension statements:

'I find it very challenging to manage my time efficiently'

'I think I am a highly organized learner'

'I generally remember appointments and arrive on time'

'When I'm planning my work, I use diagrams or mindmaps rather than lists or bullet points'

It was also particularly important to gauge the sensitivity of the Profiler by noting how consistently it would verify previously identified dyslexic students as dyslexic. Although this might also lead to some confidence about the Profiler's specificity, without applying more conventional dyslexia screeners to non-dyslexic students in the datapool, declarations about specificity would be tentative.

In keeping with the contemporary view that dyslexia is a continuum rather than a categorical construct, and that the multifactorial nature of the syndrome implies that attributes present in varying degrees in each individual, some of the attributes devised are unlikely to be uniquely located in any single component. For example, it is reasonable to suppose that the statement 'I get in a muddle when I'm searching for learning resources or information' may be variably influenced by criteria from the skillsets of all three components. How this variability might appear was unknown at the design stage of the Dx Profiler, owing to the unique, individual distribution of attributes across factors. Nevertheless, a draft of a possible typical, mean-average mapping was constructed as a starting point (Fig #), to be compared later with the output derived from the factor analysis of observed data, where attribute-factor overlap would be determined by the relative factor loadings (see Fig #, sub-section 4.#).

Fig 10: Dyslexia dimensions distributed across BDA components

A development of this work may be to further explore the characteristics of the Dx Profiler across a much wider community of students, with a particular focus on examining how it might constructively contribute to an improved understanding of the learning and study behaviours of all students at university in learning development contexts.

 

III. Validating the Dx Profiler

Before deploying the Dx Profiler as part of the research questionnaire, two further factors were considered pertinent: firstly, it was important to gain tentative confirmation that the statements devised resonated with the learning and study experiences of students at university, and hence might be a realistic attempt to gauge the levels of dyslexia-ness of participants in this current project; and secondly, that a reasonable estimate of the prevalence of each dimension could be gained to support the adjustment of the Profiler's overall numerical output using a weighted rather than a simple mean average of scores obtained from the complete set of 20 dimensions, since an unweighted mean was considered likely to skew the outputs.

To this end, it was considered prudent to obtain feedback about the proposed portfolio of statements before finalizing the Dx Profiler and incorporating it into the main research questionnaire. As the Profiler was to be a metric for use in university settings, the rationale for obtaining such feedback focussed on obtaining data from that environment, specifically from dyslexia support professionals likely to have day-to-day interactions with dyslexic students at university.

Rationale, Methods and Processes

The rationale for this enquiry was threefold:

  • By exploring the prevalence of attributes (dimensions) of dyslexia observed in the field in addition to those distilled through the theory and literature reviewed to that point, it was hoped that the data acquired would confirm that the dimensions being gauged were appropriate and recognizable features of the learning and study profiles of dyslexic students at university;

  • Through analysis of the data collected, value weightings could be ascribed to each dimension based on their reported prevalence. Hence the output of the Dx Profiler in the main research questionnaire could account for the likely relative influence of each dimension by generating a weighted mean average level of dyslexia-ness for each respondent;

  • Feedback could be sought about the design and operation of the continuous range input sliders, as these were planned to be extensively used in the main questionnaire later. 

Hence, a brief internet-based poll was designed, built and hosted on the project's webpages, seeking to gauge the prevalence and frequency of the dyslexia characteristics and attributes that were to be incorporated into the Dx Profiler. The poll contained 18 of the 20 statements, exactly mirroring those to be used in the Dx Profiler later (see Appendix 8.#). The two statements relating to learning biography included in the Dx Profiler were omitted from this poll, as it was considered unlikely that the dyslexia support tutors targeted as participants would be able to comment on these. The list of statements was prefixed with the question: 'In your interactions with students, to what extent do you encounter each of these dimensions?' Respondents recorded their answer as a percentage, where 0% indicated 'never encountered', 50% indicated 'encountered in about half of interactions', and 100% indicated 'all the time'.

Recruitment of Participants

 

Of the 132 UK Higher Education Institutions identified through the Universities UK database, 116 were identified as having Student Support Services that included an indicated provision for students with dyslexia, generally as part of more general services for students with disabilities. This was established through careful inspection of each institution's outward-facing webpages. Most of these also provided a specific e-mail address for contacting either the team of dyslexia specialists directly, or otherwise a more general enquiry address for student services. All 116 institutions were subsequently contacted by e-mail, requesting participation in the enquiry through a link to the poll. In the event, only 30 of the 116 institutions completed the poll; although this response rate was disappointing, it was considered sufficient to provide the meaningful feedback desired.

Process

The preamble to the poll described its purpose, provided instructions about how to complete it, and explained how to request withdrawal of data (revocation) after submission in the event that a participant had a change of heart about taking part. How the poll related to this current study's main research was also stated, as was an offer to share the findings of the poll provided a contact e-mail address was supplied. Participants recorded their response to the prevalence of each dyslexia-ness dimension by adjusting the input slider along the range from 0-100% (Fig 11).

Fig 11: Continuous range input slider for Dx Dimension 04
The incorporation of continuous rating scales, often referred to as visual analogue scales, in online survey research is relatively new but becoming more widespread, not least because the process is now easier to implement in web-survey designs. Hence the effects of such innovations on data quality and participant responses are also beginning to attract research interest (Treiblmaier & Filzmoser, 2011), which suggests, for example, that using input-range sliders can increase data quality (Funke & Reips, 2012). The default position was set at the midpoint of the slider scale, noting that the default position of input-range sliders has been reported to have no significant impact on output (Couper et al., 2006).

It was expected that respondents would naturally discount repeat visitors from their estimates of dimension prevalence, although this was not made explicit, to keep the preamble as brief and uncomplicated as possible. Space was provided near the end of the poll for participants to submit comments about either the enquiry itself or features of the poll. An invitation was also made to submit information about any additional attributes or characteristics of dyslexia-ness that were regularly encountered.

Results and Outcomes

 

Data received from the poll submissions were collated, and in the first instance the mean average prevalence for each dimension was calculated, derived from the average frequency (that is, extent) with which each dimension was encountered (Table #2).


Table #2: Prevalence of dyslexia dimensions

Twenty-four participants reported additional attributes encountered in their work with dyslexic students, and most also included a % prevalence:

  • poor confidence in performing routine tasks [reported by 4 respondents with prevalence respectively: 90%; 85%; 80%; % not reported (n/r)]

  • slow reading [100%; 80%; n/r]

  • low self-esteem [85%; 45%]

  • anxiety related to academic achievement [80%; 60%]

  • pronunciation difficulties / pronunciation of unfamiliar vocabulary [75%; 70%]

  • finding the correct word when speaking [75%; 50%]

  • difficulties taking notes and absorbing information simultaneously [75%; n/r]

  • getting ideas from 'in my head' to 'on the paper' [60%; n/r]

  • trouble concentrating when listening [80%]

  • difficulties proof-reading [80%]

  • difficulties ordering thoughts [75%]

  • difficulties remembering what they wanted to say [75%]

  • poor grasp of a range of academic skills [75%]

  • not being able to keep up with note-taking [75%]

  • getting lost in lectures [75%]

  • remembering what's been read [70%]

  • difficulties choosing the correct word from a spellchecker [60%]

  • meeting deadlines [60%]

  • focusing on detail before looking at the 'big picture' [60%]

  • difficulties writing a sentence that makes sense [50%]

  • handwriting legibility [50%]

  • being highly organized in deference to 'getting things done' [25%]

  • having to re-read several times to understand meaning [n/r]

  • profound lack of awareness of their own academic difficulties [n/r]

 

The additional attribute reported by the most respondents (4) related to confidence, with slow reading being reported by three respondents. Most other additional attributes were reported by only one respondent.

IV. Discussion

Although the response rate for this small-scale poll was disappointing (30 respondents out of 116 invitations to participate), it was considered that the data collected raised confidence that appropriate attributes of dyslexia had been selected, resonating with the typical field experience of dyslexia support professionals working with dyslexic students at UK universities. Although 24 attributes additional to the 18 provided in the poll were reported as prevalent, the majority were reported by only one respondent each, and hence none indicated a significant omission in the poll design. The additional attribute related to confidence was considered to be accounted for in the Academic Behavioural Confidence Scale, itself forming a major section of the main research questionnaire.

Hence, the 18 dyslexia dimensions were considered to have been validated to a sufficient degree by the outcomes of the poll to form the basis of the Dyslexia Index Profiler. In the first instance, these dimensions were formatted to be more concise; converted into the first person so that participants would feel engaged with the research; and re-phrased where necessary so that the Profiler would be relevant to all students. Secondly, the two additional dimensions relating to learning biography were now included (concerning letter reversal and slow uptake in learning to read). Table #3 shows the final iteration of the complete set of 20 dimensions, with weightings derived directly from the prevalence of dimensions established from the poll, as sketched below. The two additional dimensions were assigned weightings of 0.80 to acknowledge that early-learning challenges in these literacy skills are widely recognized as markers of dyslexia in children. The statements were ordered randomly to reduce the likelihood of order-effect bias, whereby the sequence of questions or statements in a survey may induce a question-priming effect: a response provided for one statement subsequently influences the response to a following question that appears to gauge the same or a similar aspect of the construct under scrutiny (McFarland, 1981).

Table #3: Dyslexia-ness dimensions statements and weighting
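
By way of illustration, a minimal sketch of the weighting derivation described above, using invented poll values (dimension names are hypothetical; the observed figures are those of Tables #2 and #3):

```python
import statistics

# Hypothetical poll returns: each list holds the 0-100% prevalence estimates
# submitted by responding institutions for one dimension.
poll = {
    "spelling_generally_good": [70, 85, 60, 75],
    "wrong_word_in_writing":   [55, 65, 70, 60],
}

# Each dimension's weighting is its mean reported prevalence rescaled to 0-1.
weights = {dim: statistics.mean(v) / 100 for dim, v in poll.items()}

# The two learning-biography dimensions were not polled and, as described
# above, were assigned weightings of 0.80 directly.
weights["slow_uptake_learning_to_read"] = 0.80
weights["letter_reversal"] = 0.80
```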

Generating the Dyslexia Index

Reverse coding

The objective of the Profiler was to generate a numerical output for every student participant - their Dyslexia Index (Dx) - and it was considered appropriate to aggregate the input values of the Profiler in such a way that a high final Dx value points towards a high level of dyslexia-ness. However, as the Dx Profiler was designed to include a balance of positively and negatively phrased statements (see sub-section 3.2(II)), aggregating dimension-statement values without taking account of whether a high or a low value was a marker of strong dyslexia-ness would skew the Dyslexia Index. For example, for Dimension #2, 'My spelling is generally very good', it is reasonable to expect that a strongly dyslexic participant would be likely to disagree with this statement, and hence record a low value for this dimension; whereas for Dimension #1, relating to slow uptake of early-years basic reading skills, the same respondent might be likely to record a high value, indicating strong agreement. Hence the value outputs for some statements needed to be reverse-coded to ensure that high values on all statements indicated high levels of dyslexia-ness.

This process could only be completed after data had been collected from participants later in the research. However, likely 'candidates' for reverse-coding were identified at the design stage 'by eye'. A more rigorous approach was applied post hoc by conducting a product-moment correlation between the dimension being 'tested' and the weighted mean average of the remaining 19 dimensions, using Dx data collected from participants in the study. It is acknowledged that this process is tentative and limited, not least because some dimensions remaining in the aggregate whilst the tested dimension was removed could themselves require reverse-coding. This aspect of the Dx Profiler needs more work, not least should the metric be developed for wider deployment, which may form the topic of a later project. However, in this current study, the process was considered robust enough to enable the outputs from the Profiler to be used.
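
A sketch of this post-hoc check, assuming responses are held as a NumPy array (the data structures are assumptions for illustration; the study's own analysis was conducted in its Excel/SPSS workflow):

```python
import numpy as np

def item_rest_correlation(data: np.ndarray, weights: np.ndarray, item: int) -> float:
    """Pearson r between one dimension and the weighted mean of the other 19.

    data    : (n_respondents, 20) array of 0-100 responses
    weights : (20,) prevalence-derived weightings
    """
    rest = np.delete(np.arange(data.shape[1]), item)
    rest_mean = data[:, rest] @ weights[rest] / weights[rest].sum()
    return float(np.corrcoef(data[:, item], rest_mean)[0, 1])

# A strongly negative r, such as the -0.51 found for 'my spelling is generally
# very good', flags an item for reverse-coding: value_rc = 100 - value.
```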

 

Table #4 shows the list of 20 dimensions, whether a high (H) or a low (L) value was expected to be a marker of strong dyslexia-ness, the corresponding value of the correlation coefficient, r, between the dimension and the weighted mean of the remaining nineteen dimensions generated from the complete datapool post hoc, and whether or not that dimension's output was subsequently reverse-coded (RC). In the event, only Dimension #2, 'my spelling is generally very good', was reverse-coded, due to a relatively high negative correlation with Dx of r = -0.51. For the other dimensions that were expected to require reverse-coding, correlations with Dx were close to zero, suggesting that the impact on the aggregated final Dyslexia Index of reverse-coding or not would be minimal.

Table #4: Dyslexia-ness dimensions reverse-coding summary

Calculating Dyslexia Index (Dx)

Table #5 demonstrates the weighted mean calculation of the Dyslexia Index (Dx) using the raw scores (observed values) obtained from a randomly chosen research participant as an example. The Dx output was scaled to a value between 0 and 1000 to distinguish it more easily from the participant's ABC value, derived directly from the unscaled, unweighted mean average of their responses to the 24 statements of the ABC Scale, each gauged in the range 0 to 100.
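
A minimal sketch of the calculation itself, with invented responses and weights (the worked example with observed values is Table #5):

```python
import numpy as np

# Four of the 20 dimensions shown for brevity; responses are 0-100% agreement
# values (reverse-coded where required) and weights are prevalence-derived.
responses = np.array([72.0, 15.0, 66.0, 80.0])
weights   = np.array([0.80, 0.72, 0.65, 0.58])

# Weighted mean of the dimension scores, scaled by 10 so that 0-100%
# responses map onto a Dyslexia Index in the range 0-1000.
dx = 10 * float(responses @ weights / weights.sum())
print(round(dx, 1))   # 573.5 for these illustrative values
```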

V. Concluding summary

 

In summary, the Dx Profiler calculated a Dyslexia Index for each respondent in the research datapool, being a weighted mean average of responses to 20 Likert-style item statements, each aiming to capture data relating to a specific study attribute or behaviour, or an aspect of learning biography. Respondents recorded their strength of agreement with each statement along a continuous range from 0% to 100%. Weightings were derived from the prevalence of characteristics determined through a poll of dyslexia support practitioners. The weighted mean was scaled to provide an output, Dyslexia Index (Dx), in the range 0 < Dx < 1000. With data available following deployment of the main research questionnaire, dimensionality reduction (PCA) was applied to explore the factor structure of the Dx Profiler. This was firstly to compare the output with the speculated structure based on the BDA definition of dyslexia, and secondly to determine whether a useful cross-factorial analysis might be conducted with outputs from the ABC Scale. The aim was to explore more thoroughly the associations revealed between academic confidence and dyslexia-ness (reported in sub-section 4.#). This analysis remains tentative and, to an extent, speculative, because the sample (n=98) from which it was generated is quite small. A later study could aim to develop the Dx Profiler by collecting data from larger and more varied samples, hence enabling PCA to be applied more confidently.

Through this extensive development process, the Dx Profiler was considered to have met its design specifications and was used confidently to gauge the dyslexia-ness of the participants in the study.

3. Qualitative Data

The final part of the questionnaire collected qualitative data in an optional, unlimited free-writing area. Participants were invited to comment on any aspects of the research, the questionnaire, or their learning experiences at university more generally.

4. Questionnaire pilot

The questionnaire was trialled amongst a small student peer-group (n=10) to gain feedback about its style of presentation, ease of use, the clarity of the questions and statements, the quality of the introduction, the length of time it took to complete, and any issues that had arisen in the way it displayed in the web-browser used, and to elicit any other comments that might indicate that a review or partial review would be necessary before deployment to the target audience. The outcome of this pilot indicated that, other than some minor wording changes, no amendments were required.

 

3.4 Data processing summary

Questionnaire responses were received by e-mail and identified from the data as being submitted either by a student with declared dyslexia or by a student with no declared dyslexia. Subsequently, raw data were transferred into Excel for initial inspection. A Dyslexia Index was calculated for each respondent using the weighted mean average process applied to the 20 scale items, developed at the design stage of the Dx Profiler in the light of the analysis of the pilot study (see Appendix 8.1(I)). Students from the non-dyslexic group whose Dyslexia Index exceeded Dx = 592.5 were categorized as quasi-dyslexic (see sub-section 4.## for the discriminating rationale). Each respondent's ABC score was calculated as a non-weighted mean average of the 24 scale-item responses, each with a range from 0 to 100, leading to an output of 0 < ABC < 100 (Fig 9).
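
The sifting and scoring steps described above might be sketched as follows (field names hypothetical; the study itself used Excel and SPSS rather than code):

```python
def classify(declared_dyslexia: bool, dx: float) -> str:
    """Assign a respondent to Research Group DI, DNI or ND (see sub-section 3.2)."""
    if declared_dyslexia:
        return "DI"        # declared dyslexia
    if dx > 592.5:
        return "DNI"       # quasi-dyslexic: high Dx but no declaration
    return "ND"            # non-dyslexic

def abc_score(item_scores: list[float]) -> float:
    """Unweighted mean of the 24 ABC Scale responses, each in the range 0-100."""
    return sum(item_scores) / len(item_scores)

print(classify(False, 640.0))   # 'DNI'
```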

The complete datapool was transferred into SPSS v24 (IBM Corp, 2016) for further analysis.


Figure 9:   Data processing flowchart.

 

VI. Statistical tools and processes

Use of the t-test in preference to ANOVA

 

Through the adoption and adaptation of the ABC Scale and the careful design and development of the Dyslexia Index Profiler, both of these metrics are considered continuous variables, being respectively the dependent and independent measures in this study. Although the datapool was sifted into the three research subgroups described above, dyslexia-ness remained a continuous variable across the complete datapool, and thus individual students' data-response pairings across the two variables were preserved. This would enable a regression analysis to be considered later, to determine whether any predictive association exists between dyslexia-ness and academic confidence.
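
Such a regression might be sketched as follows (illustrative values only; variable names are assumptions):

```python
from scipy import stats

# Paired (Dx, ABC) observations per respondent across the full datapool,
# invented here for illustration.
dx  = [310.0, 455.5, 588.0, 620.0, 702.5]
abc = [78.2, 71.5, 63.8, 60.3, 55.0]

# Simple linear regression of ABC on Dx: slope and R^2 indicate whether
# dyslexia-ness carries any predictive association with academic confidence.
result = stats.linregress(dx, abc)
print(result.slope, result.intercept, result.rvalue**2)
```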

 

The focus of the data analysis in this enquiry has been to determine whether significant differences exist in mean values of the dependent variable across the research subgroups. It is recognized that the application of ANOVA to these data may have been appropriate, although this process is usually recommended where the independent variable is categorical in nature (Lund & Lund, 2016). In this current study, had dyslexia-ness been categorized into 'high', 'moderate' or 'low', or indeed sub-gradations of these, ANOVA may have been an appropriate statistical test to use (Moore & McCabe, 1999). However, it was felt that the relatively simpler Student's t-test would be a better choice for determining whether or not significant differences exist in (population) mean values of ABC where the continuously valued Dyslexia Index is used as the independent variable.

 

In this way, a matrix of t-test outcome-pairs could be constructed to identify significant differences not only between levels of ABC for the three research subgroups, but also at a factorial level both of ABC and of Dyslexia Index, following a principal component analysis of both variables. It is recognized that the t-statistic used in the t-test forms the basis of ANOVA in any case, where for two groups the required F-statistic in ANOVA is exactly equal to t². It is possible that this analysis decision may be reconsidered, perhaps as a recommended project development, by redefining Dyslexia Index as a categorical variable and establishing clear categorical boundaries containing ranges of dyslexia-ness that could be assigned categories such as 'low', 'low-to-moderate', and so on. In this way, an ANOVA would then be an appropriate statistical analysis to perform.
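
For two independent groups of sizes n1 and n2, this equivalence is exact (a standard result, stated here for reference):

\[
F_{(1,\; n_1 + n_2 - 2)} \;=\; t_{(n_1 + n_2 - 2)}^{2}
\]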

Effect sizes

 

Effect size challenges the traditional convention that the p-value is the most important data analysis outcome for determining whether an observed effect is real or can be attributed to chance (Maher et al., 2013). The use of effect size as a method for reporting statistically important analysis outcomes is gaining traction in education, social science and psychology research (Rollins et al., 2019), not least in studies about dyslexia, where it is claimed to be a vital statistic for quantifying the outcomes of interventions designed to assist struggling readers (ibid).

 

Effect size values measure either the magnitude of associations or the magnitude of differences, depending on the nature of the datasets being analysed. Effect size is easy to calculate: the simplest result is the absolute difference between the means of two independent groups' datasets. Cohen's 'd' is an improved measure, derived by dividing this difference by the standard deviation of either group (Cohen, 1988), and is commonly used (Thalheimer, 2002).
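
Expressed formally (the standard formulation, not specific to this study), for group means x̄1 and x̄2 and the standard deviation s of either group:

\[
d \;=\; \frac{\bar{x}_1 - \bar{x}_2}{s}
\]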

 

Effect size is useful as a measure of the between-groups difference in means, particularly when measurements have no intrinsic meaning, as is often the case with data generated from Likert-style scales (Sullivan & Feinn, 2012, p. 279). Hence at an early stage of planning the data analysis process, effect size measures were chosen as the main data analysis outcomes, although in preference to Cohen's 'd', the alternative measure Hedges' 'g' was used, because it takes better account of the sample sizes of the respective distributions by using a 'pooled' (that is, weighted) standard deviation in the effect size calculation (Cumming, 2010). This is especially appropriate when the sample sizes are notably different, as in this project.
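
Expressed formally (again a standard formulation), with group sizes n1, n2 and standard deviations s1, s2:

\[
g \;=\; \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)s_1^{2} + (n_2 - 1)s_2^{2}}{n_1 + n_2 - 2}}
\]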

Principal Component Analysis

 

Principal Component Analysis (PCA) performs dimensionality reduction on a set of data, especially on a scale attempting to evaluate a construct. The purpose of the process is to determine whether a multi-item scale can be reduced to a simpler structure with fewer components (Kline, 1994).
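
A minimal sketch of such a dimensionality reduction, using scikit-learn and placeholder data (the study's own analysis was run in SPSS on observed responses; see sub-section 3.4):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder stand-in for the (n_respondents, 24) matrix of ABC Scale
# item responses, each in the range 0-100.
rng = np.random.default_rng(1)
responses = rng.uniform(0, 100, size=(98, 24))

# Standardize items, then extract principal components.
pca = PCA()
pca.fit(StandardScaler().fit_transform(responses))

# Eigenvalues of the standardized item structure; the usual Kaiser
# criterion retains components with eigenvalue > 1.
eigenvalues = pca.explained_variance_
print(int((eigenvalues > 1).sum()), "components retained")
```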

 

As a useful precedent, Sander and Sanders (2003) recognized that dimension reduction may be appropriate and conducted a factor analysis of their original, 24-item ABC Scale, which generated a 6-factor structure whose components were designated Grades, Studying, Verbalizing, Attendance, Understanding and Requesting. Their later analysis of the factor structure found that it could be reduced to a 17-item scale with four factors, designated Grades, Verbalizing, Studying and Attendance (Sander & Sanders, 2009). The reduced, 17-item ABC Scale merely discounts 7 items from the original 24-item scale, which is otherwise unamended. In this project it was therefore considered appropriate to deploy the full, 24-item scale to generate an overall mean ABC value in the analysis, so that an alternative 17-item overall mean ABC value could also be calculated to examine how this might impact on the outcomes.

 

But much like Cronbach's alpha as a measure of internal consistency, a factor analysis is specific to the dataset to which it is applied. Hence the factor analysis through which Sander and Sanders (ibid) generated their reduced-item scale with four factors was derived from the collated datasets available to them from previous work with the ABC Scale, sizeable though this became (n=865). It was considered, therefore, that the factor structure their analysis suggested may not be entirely applicable more generally without modification or local analysis, despite being widely used by other researchers in one form (ABC24-6) or another (ABC17-4) (e.g.: de la Fuente et al., 2013; de la Fuente et al., 2014; Hilale & Alexander, 2009; Ochoa et al., 2012; Willis, 2010; Keinhuis et al., 2011; Lynch & Webber, 2011; Shaukat & Bashir, 2016). Indeed, when reviewing the ABC Scale, Stankov et al. (in Boyle et al., 2015) implied that more work should be done on consolidating some aspects of the Scale, not so much by levelling criticism at its construction or theoretical underpinnings, but to suggest that, as a relatively new measure (post-2003), it would benefit from wider application in the field and subsequent scrutiny of how it is built and what it attempts to measure.

 

Hence conducting a factor analysis of the data collected in this project using the original 24-item ABC Scale is worthwhile, because it may reveal an alternative factor structure that fits the context of this enquiry more appropriately.
