
Section 3 revised March-May 2020

Research Design - Methodology and Methods

 

3.1 Research Design Overview

The research aim was twofold: firstly, to establish the extent of all participants' 'dyslexia-ness', this to be the independent variable; secondly, to gauge their academic confidence in relation to their studies at university, the dependent variable, so that associations between the variables could be explored. This section describes the strategic and practical processes of the project that were planned and actioned to meet the research aim. Details are provided about how practical processes were designed and developed to enable appropriate data sources to be identified; how the research participants were identified and contacted; and how data were collected, collated and analysed so that the research questions could be properly addressed. This section of the thesis also sets out the development of the Dyslexia Index Profiler, the metric designed and developed exclusively for this project to gauge dyslexia-ness (sub-section 3.3.III(2), below). The rationales for research design decisions are set out and justified, and where the direction of the project has diverged from the initial aims and objectives, the reasons for these changes are explained.

Design Focus

The project has taken an explorative, mixed methods design focus (see Section 1.2(II)). This is because little is known about the interrelationships between academic confidence and dyslexia, and hence no earlier model has been available to provide guidance. Data were collected through a self-report questionnaire and were mainly quantitative: Likert scale item responses were transformed into numerical data for analysis. Some qualitative data were also collected through a free-writing area in the questionnaire, enabling a 'softer' exploration of participants' more general feelings and attitudes towards studying at university. In this way, hypotheses formalized from the research questions were addressed objectively using the outputs from the statistical analysis of the quantitative data, with the qualitative data used to elaborate discussion points later. This is reported fully in Section 4, Results and Analysis.

 

3.2      Research participants: Groups and Subgroups

The participants were all students at university, and no selective or stratified sampling protocols were used in relation to gender, academic study level or study status - that is, whether an individual was a home or overseas student. However, all three of these parameters were recorded for each respondent, and these data have been used throughout the analysis and discussion where considered apposite. It is possible that a later study may revisit the data to explore differences that might emerge through stratified analysis.

The objective was to establish a sizeable research datapool through convenience sampling that comprised two groups: the first was to be as good a cross-section of HE students as may be returned through voluntary participation in the project. Participants in this group were recruited through advertisements posted on Middlesex University’s student-facing webpages during the academic year 2015-16. The second group was to be students known to have dyslexic learning differences. These were recruited through the University’s Dyslexia and Disability Service student e-mail distribution list. Recruitment was incentivized by offering participants an opportunity to enter a prize draw subsequent to completing the questionnaire. Amazon vouchers were offered as prizes. From the group of non-dyslexic students, it was hoped that a subgroup of students presenting quasi-dyslexia could be identified. It was of no consequence that students with dyslexia may have found their way to the questionnaire through the links from the intranet rather than as a response to the Disability and Dyslexia Service's e-mail, because the questionnaire requested participants to declare any dyslexic learning challenges. Hence, participants would be assigned into the appropriate research group from either recruitment process.

 

Thus, two distinct datasets were established:

  • Students with known dyslexia - designated Research Group DI;

  • Students with no known dyslexia - designated Research Group ND;

 

Through the data collation process, a subgroup of students was established from the non-dyslexic group, being those who presented quasi-dyslexia as identified by the Dyslexia Index Profiler. This dataset was designated Research Group DNI.

Hence, it was possible to compare levels of academic confidence between the three groups.

 

3.3  Data Collection

I - Objectives

As this project was focused on finding out more about the academic confidence of university students and relating this to levels of dyslexia-ness, the data collection objectives were:

  • to design and build a data collection instrument that could gather information about academic confidence and aspects of dyslexia-ness, expediently and unobtrusively from a range of university students, in information formats that could easily be collated and statistically analysed once acquired;

  • to ensure that the data collection instrument was as clear, accessible and easy-to-use as possible, noting that many respondents would be dyslexic;

  • to ensure that the data collection instrument could acquire information quickly (15 minutes was considered as the target) to maintain research participant interest and attention;

  • to design an instrument that could be administered online for participants to engage with at their convenience;

  • to enable participants to feel part of a research project rather than its subjects, and hence engage with it and provide honest responses;

  • to maximize response rates and minimize selection bias for the target audience;

  • to ensure compliance with all ethical and other research protocols and conventions for data collection according to guidelines and regulations specified by the researcher's home university.

 

These objectives were met by designing and building a self-report questionnaire. Carefully constructed survey questionnaires are widely used to collect data on individuals' feelings and attitudes that can be easily numericized to enable statistical analysis (Rattray & Jones, 2007), and questionnaires are one of the most commonly used processes for collecting information in educational contexts (Colosi, 2006). This rationale falls within the scope of survey research methodology, in which asking participants questions about the issues being explored is a practical and expedient means of data collection, especially where more controlled experimental processes, such as might be conducted in a laboratory, or other methods of observing behaviour, are not feasible (Loftus et al., 1985). Self-report questionnaires have been shown to provide reliable data in dyslexia research (e.g.: Tamboer et al., 2014; Snowling et al., 2012). Developments in web-browser technologies and electronic survey creation techniques have led to the widespread adoption of questionnaires delivered electronically across the internet (Ritter & Sue, 2007), and so this process was used. The ability to reach a complete community of potential participants through the precise placement and marketing of a web-based questionnaire was felt to have significant benefits. These included:

  • the ability for the researcher to remain detached from the data collection process, reducing any researcher-induced bias;

  • the ability for respondents to complete the questionnaire privately, at their own convenience and without interruption, which it was hoped would lead to responses that were honest and accurate;

  • ease of placement and reach, achieved through the deployment of a weblink to the questionnaire on the home university's website;

  • ease of data receipt using the standard design feature in online surveys of a 'submit' button to generate a dataset of the responses in tabular form for each participant, automatically sent by e-mail to the researcher's university mail account;

  • the ability to ensure that participant consent had been obtained, by making access to the questionnaire conditional on agreement;

  • the facility for strict confidentiality protocols to be applied whereby a participant's data, once submitted, were to be anonymous and not attributable to the participant by any means.

 

Every questionnaire response received was anonymised at the submission point with a randomly generated 8-figure Questionnaire Response Identifier (QRI). The QRI was automatically added to the response dataset by the post-action process for submitting the form as an e-mail. Should any participant subsequently request revocation of data submitted, this could be achieved by quoting the QRI in the revocation request form, also submitted electronically and received anonymously. In fact, no participants requested this.
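As an illustration of this anonymization step, the minimal sketch below (Python) shows how an 8-figure QRI might be generated and attached to a response dataset. The thesis does not specify the server-side implementation, and the field names here are hypothetical:

```python
import secrets

def generate_qri() -> str:
    """Return a random 8-figure Questionnaire Response Identifier (QRI)."""
    return f"{secrets.randbelow(10**8):08d}"

# Hypothetical example: attach a QRI to a submitted response dataset
# before it is e-mailed onwards; no personally attributable data is kept.
response = {"study_level": "Level 6", "domicile": "home"}
response["QRI"] = generate_qri()
print(response["QRI"])  # e.g. '04731986'
```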

II - Questionnaire design rationales

The questionnaire was designed to be as clear and as brief as possible. Notably, guidance provided by the British Dyslexia Association was helpful in meeting many of the design objectives. Additional literature was consulted about designing accessible online and web-based information systems, with particular attention to text formats and web design for visually impaired and dyslexic readers, to ensure dyslexia-compliant readability (Gregor & Dickinson, 2007; Kurniawan & Conroy, 2007; Al-Wabil et al., 2007; Beacham & Alty, 2006; Evett & Brown, 2005).

Secondly, a range of later research was consulted to explore how dyslexia-friendly online webpage design may have been reviewed and updated in the light of the substantial, relatively recent expansion of online learning initiatives. These have developed within HE institutions through virtual learning environments (VLEs) and digital learning object platforms such as Xerte (Xerte Community, 2015) and Articulate (Omniplex Group, 2018), and from external sources such as MOOCs and free-course providers like FutureLearn (Open University, 2018), all of which rely on modern web-browser functionality (Rello et al., 2012; Chen et al., 2016; Berget et al., 2016).

 

Additionally, literature was consulted to understand how the latest HTML5 web technologies and the rapid rise in smart mobile device use were influencing universal web design (Riley-Huff, 2012; 2015; Henry et al., 2014; Fogli et al., 2014; Baker, 2014). It was apparent that online information presentations which enshrined strong accessibility protocols not only enabled better access for those with dyslexia, or who experienced visual stress or other vision differences, but provided better accessibility and more straightforward functionality for everyone (McCarthy & Swierenga, 2010; Rello et al., 2012; de Santana et al., 2013). Other literature was consulted for guidance about the impact of design and response formats on data quality (Maloshonok & Terentev, 2016), on response and completion rates (Fan & Yan, 2010), on the effectiveness of prize draw incentivizations (Sanchez-Fernandez et al., 2012) and invitation design (Kaplowitz et al., 2011), and about web form design characteristics recommended for effectiveness and accessibility (Baatard, 2012).

 

Part of the questionnaire design stage included a review of existing web survey applications to determine whether any provided sufficiently flexible design customizability to meet the design specifications that had been scoped out. Applications (apps) reviewed were Google Forms (Google, 2016), SurveyMonkey (Survey Monkey, 2016), SurveyLegend (Survey Legend, 2016), Polldaddy (Automattic, 2016), Survey Planet (Survey Planet, 2016), Survey Nuts (Zapier Inc., 2016), Zoho Survey (Zoho Corp., 2016) and Survey Gizmo (Widgix, 2016). The limitations of these proprietary survey design apps were found to be numerous and broadly similar: a limited number of respondents per survey; strictly constrained design and functionality options; and embedded advertising or enforced branding, removable only by subscribing to payment plans. None of the apps reviewed included the functionality of range input sliders.

 

Hence the project questionnaire was designed according to these design rationales:

  • it was an online questionnaire that rendered properly in at least the four most popular web-browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Safari (usage popularity respectively 69.9%, 17.8%, 6.1%, 3.6%, data for March 2016 (w3schools.com, 2016)). Advice was provided in the questionnaire preamble that these were the best web-browsers for viewing and interacting with the questionnaire, and links were provided for downloading the latest versions of the two most popular browsers;

  • text, fonts and colours were carefully chosen to ensure that the questionnaire was attractive to view and easy to engage with, meeting W3C Web Accessibility Initiative Guidelines (W3C WAI, 2016);

  • an estimate was provided about completion time (15 minutes);

  • questions were grouped into five short sections, each focusing on a specific aspect of the research, with each question-group viewable one section at a time. This was to attempt to reduce survey fatigue and poor completion rates (McPeake et al., 2014; Ganassali, 2008; Flowerdew & Martin, 2008; Marcus et al., 2007; Cohen & Manion, 1994). In the event, only 17 of the 183 questionnaires returned were incomplete (9.3%);

  • the substantial part of the questionnaire used Likert-style items in groups, presenting response options using range sliders to gauge agreement with statements;

  • the questionnaire scale item statements were written as neutrally as possible or, where this was difficult to phrase, in a blend of negative and positive wording (e.g.: Sudman & Bradburn, 1982). This was an attempt to avoid tacitly suggesting that the questionnaire was evaluating the impacts of learning difficulty, disability or other learning challenge on studying at university, signalling instead that a balanced approach was being used to explore a range of study strengths as well as challenges. Account was taken of evidence that wording polarity may influence respondents' answers to individual questions, with 'no' being a more likely response to negative questions than 'yes' is to positively worded ones (Kamoen et al., 2013). Account was also taken of evidence that two widely claimed suppositions - that wording survey items negatively encourages respondents to be more attendant to them, and that mixing item polarity may confuse respondents - rest on internal reliability analyses that are dubious at best (Barnette, 2000). Hence applying scale item statement neutrality where possible was considered the safest approach for minimizing bias;

  • a free-writing field was included to encourage participants to feel engaged with the research by providing an opportunity to make further comments about their studies at university in whatever form they wished. This had proved to be a popular feature in the preceding dissertation questionnaire (Dykes, 2008), providing rich, qualitative data;

  • on submission, an acknowledgement (receipt) page was triggered, a copy of the responses submitted could be inspected, and revocation of the data could be requested if desired;

The questionnaire was built, tested and published on the project webpages which had been established and hosted on the researcher's private web server, not least as this presented the most expedient means to retain complete control over both the content and security of the webpages. The questionnaire remains available here.

 

 

III - Questionnaire components


After a short foreword about the purpose of the project, instructions for completing and submitting the questionnaire were detailed, together with advice about which web-browsers would provide the best interactive experience. Download links to these browsers were also provided.

 

The questionnaire comprised three main sections: the first presented demographic data fields that all participants were to complete; the second comprised quantitative data collection fields to explore academic confidence and dyslexia-ness; and the final section collected qualitative data. Closing statements at the end of the questionnaire included the 'submit' button.

1. Demographic data

 

Data were collected on gender, student domicile ('home' or 'overseas') and student study level, with options provided from Foundation Level 3/4 to post-doctoral researcher Level 8 (QAA, 2014). This preliminary section also asked students with dyslexia how they learned of their dyslexia, by selecting options from two drop-down menus to complete a sentence (Fig. 8), thus collecting data to address Hypothesis 3 (see sub-section 1.4).

2. Quantitative data

 

This section comprised firstly the ABC Scale, presented verbatim (Sander & Sanders, 2006b, 2009; see Appendix 8.2(II)). In the final iteration of the questionnaire, this was followed by a set of 60 statements designed to gauge dyslexia-ness, which was in two parts, although these were not visibly separated. The first part comprised 36 statements forming six psychometric subscales of six statements each, attempting to gauge respectively elements of the constructs of Learning Related Emotions; Anxiety Regulation and Motivation; Self-Efficacy; Self-Esteem; Learned Helplessness; and Academic Procrastination. This part was a legacy of the research conducted in the earlier MSc dissertation (Dykes, 2008) (below, Section 2, Part 1). The second part comprised the 24-statement Dyslexia Index Profiler, developed following a critical appraisal of the suitability of the psychometric scales for meeting the research objectives of the study, and subsequently used as the principal discriminator to identify the subgroup of quasi-dyslexic students (below, Section 2, Part 2).

Likert Scales

Likert-style scales were used to collect quantitative data throughout the questionnaire. Participants reported their degree of agreement with each scale-item statement using a continuous response scale approach, developed for this project in preference to traditional, fixed anchor point scale items. When conventional, fixed anchor points are used - commonly 5- or 7-points - the data produced are numerically coded so that they can be statistically analysed. There is some debate about whether data coded in this way justify parametric analysis, because the coding process assigns arbitrary numerical values to non-numerical data collection responses - usually ranging from 'strongly disagree' to 'strongly agree' - hence generating a discrete variable. This coding is generally an essential first stage of the data analysis process, but one which renders the data neither authentic nor actual (Carifio & Perla, 2007; Carifio & Perla, 2008; Ladd, 2009). However, the issue of conducting parametric analysis on data generated from Likert-style scales remains controversial, aggravated by a tendency amongst researchers to fail to demonstrate clearly their understanding of the differences between Likert-style scales and Likert-style scale items (Brown, 2011), compounded by not properly clarifying whether their scales are gauging nominal, ordinal, or interval (i.e. continuous) variables.
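To make the contrast between the two response formats concrete, a minimal sketch follows; all values are hypothetical and serve only to illustrate the discrete-versus-continuous distinction discussed above:

```python
# Coding a conventional 5-anchor-point Likert item assigns an arbitrary
# numerical code to a verbal response, yielding a discrete variable:
ANCHORS = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
           "agree": 4, "strongly agree": 5}
coded = ANCHORS["agree"]      # 4 - an arbitrary, discrete code

# A continuous range slider returns a (near-)continuous value directly,
# with no intermediate coding step:
slider_response = 73.0        # % agreement, anywhere in the 0-100 range
```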

 

The incorporation of continuous rating scales, often referred to as visual analogue scales, in online survey research is relatively new but becoming more widespread, not least because the process is now becoming easier to implement in web-survey designs. The effects of such innovations on data quality and participant responses are hence also beginning to attract research interest (Treiblmaier & Filzmoser, 2011), which suggests, for example, that using input-range sliders can increase data quality (Funke & Reips, 2012). By using range sliders, the data collected would be as close to continuous as possible, thus enabling parametric analysis to be reasonably conducted (Jamieson, 2004; Pell, 2005; Carifio & Perla, 2007; 2008; Grace-Martin, 2008; Ladd, 2009; Norman, 2010; Murray, 2013; Mircioiu & Atkinson, 2017). In this questionnaire, the continuous scales were set as percentage agreement, ranging from 0% to 100%, corresponding respectively to participants strongly disagreeing, through to strongly agreeing, with each statement.

 

Section 1: The Academic Behavioural Confidence Scale:

 

Academic confidence was assessed using the existing ABC Scale (Sander & Sanders, 2006b). A range of research has found this to be a good evaluator of the academic confidence of university-level students by examining their study behaviours and actions (see Section 2.2). Currently, no other metrics exist which explicitly focus on gauging confidence in academic settings (Boyle et al., 2015). Evaluators do exist for measuring self-efficacy or academic self-efficacy, which is considered to be the umbrella construct that includes academic confidence (Sander & Sanders, 2003); however, none of these measures matched the research objectives of this study as appropriately as the ABC Scale. The 5-anchor-point Likert responders in the published ABC Scale were replaced with continuous range sliders so that parametric statistical analysis could be conducted on the data collected.

Following precedents set by Sander and Sanders (2006a, 2009), principal component analysis (PCA) was applied to explore the factor structure of the ABC Scale for the data collected in this study. This was to determine whether a cross-factorial analysis might be conducted with outputs from the Dyslexia Index Profiler, similarly examined using PCA, to explore more nuanced factorial-level associations (reported in sub-section 4.#), but also to comment on the generalizability of the factor structure of the ABC Scale suggested in the Sander & Sanders studies. Of the many studies found which use the ABC Scale as their principal metric, only one was found to have applied a study-specific factor analysis to the data collected in lieu of applying the existing, published factor structure of the Scale (Corkery et al., 2011).
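In this study the analysis was conducted in SPSS (see sub-section 3.4); purely as an equivalent, minimal sketch, the Python fragment below shows how such a PCA might be run. The response matrix here is a randomly generated placeholder, not the study's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the study's responses: an
# (n_respondents x 24) matrix of ABC Scale slider values (0-100).
rng = np.random.default_rng(42)
abc_responses = rng.uniform(0, 100, size=(98, 24))

# Standardize the items, then extract components; the explained-variance
# ratios and the component loadings guide how many factors to retain.
pca = PCA()
pca.fit(StandardScaler().fit_transform(abc_responses))

print(pca.explained_variance_ratio_[:6].round(3))
print(pca.components_[0].round(2))  # loadings of the first component
```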

 

Section 2, Part 1: Six Psychometric Scales:

 

The data collection process of the earlier MSc dissertation (Dykes, 2008) had developed psychometric scales whose purpose was to explore the feelings and attitudes of dyslexic students to their dyslexia in the context of their university studies. In the early stage of the research design process for this current study, it was planned that these six subscales would be combined into a profile chart that would have sufficient discriminative power to enable quasi-dyslexic students to be identified from the group of non-dyslexic students. This would be achieved through visual discrimination between the radar chart profiles generated from the six subscales for each respondent when these were overlaid onto the profile charts generated from the mean average data for students in the dyslexic group and the non-dyslexic group respectively (Fig 9).

 

The rationale was based on evidence from literature which suggested that discernible differences exist between dyslexic and non-dyslexic individuals for each of these six constructs. For example, levels of self-esteem are depressed in dyslexic individuals in comparison to their non-dyslexic peers (e.g.: Riddick et al., 1999; Humphrey, 2002; Burton, 2004; Alexander-Passe, 2006; Terras et al., 2009; Glazzard, 2010; Nalavany et al., 2013). Furthermore, Humphrey and Mullins (2002) examined several factors that influenced the ways in which dyslexic children perceived themselves as learners, identifying learned helplessness as a significant characteristic; and a study by Klassen et al. (2008) compared levels of procrastination between students with and without dyslexia, finding that dyslexic students exhibit significantly higher levels of procrastination when tackling their academic studies at university in comparison to students with no indication of dyslexia. Extensive literature exists supporting the impact of dyslexia on the remaining three constructs.

 

To develop the charts for use in this current project, pseudo-data were generated to simulate mean outputs for typically dyslexic, and typically non-dyslexic individuals, based on stereotypical rationales built from personal, practitioner experience of working with students with dyslexia at university, together with evidence from the previous study. A known non-dyslexic individual was then used to generate a sample respondent profile to overlay onto the simulated mean profiles. The aim was to determine whether a 'by eye' judgement for spotting profile anomalies would have sufficient discriminative power for the process to reliably identify quasi-dyslexic students from the non-dyslexic group later. Although the development of a more data-analysis-based criterion was considered, a reliable process appeared likely to be quite convoluted and probably unworkable, and as nothing similar was found in existing literature, no other guidance was available to consult. Even so, the resulting, overlapping visualizations were distinct (Fig. 9, generated from observed data collected later from the quasi-dyslexic subgroup), but it was considered doubtful that the complete set of profiles would show sufficiently discernible differences to be accurate enough for use as a discriminating tool. Hence this approach was abandoned in favour of developing an alternative, quantitative process as the discriminator between dyslexic, non-dyslexic, and quasi-dyslexic students, which emerged as the Dyslexia Index Profiler (below, Section 2, Part 2). Nevertheless, the profile chart visualizations were intriguing, suggesting that these data may have value, and so this section of the questionnaire was not deleted. The complete set of profile charts was constructed after data had been collated and has been reserved so that the idea may be explored and reported later, perhaps as part of a subsequent study.

Figure 8: Selecting how dyslexic students learned of their dyslexia

Figure 9: The profile chart for a respondent in the quasi-dyslexic sub-group 

 

Section 2, Part 2: The Dyslexia Index Profiler

 

After abandoning the profile charts idea, it was necessary to establish a fresh, robust mechanism to identify quasi-dyslexic students from the non-dyslexic group without overtly screening for dyslexia. The entire project hinged on finding out whether levels of academic confidence are influenced differently by dyslexia, quasi-dyslexia, and non-dyslexia, with the aim of suggesting that differences that emerge may be at least partly attributable to the dyslexic label. Thus, developing the Dyslexia Index (Dx) Profiler became a major component of the research design process.

 

This sub-section sets out firstly the background to the development of the Dx Profiler, followed by a description of the methods and processes devised, concluding with the results and outcomes, a short discussion, and details about how the metric was used to calculate each participant's Dyslexia Index, illustrated with an example drawn from the research data.

 

I. Background and rationale

Development of the Dx Profiler has been a complex process, grounded in pertinent theory about the broad and multifactorial nature of dyslexia (discussed in Section 2.1). To have used a proprietary dyslexia screener would have raised ethical challenges related to disclosure for participants in the non-dyslexic group, hence compromising the requirement for data collection anonymity. Stated use of a screener may also have introduced bias, where respondents who were not (identified as) dyslexic may have answered some parts of the questionnaire untruthfully through fear of being identified as dyslexic. Such fear is widely reported, in particular amongst health professionals (e.g.: Shaw & Anderson, 2018; Evans, 2014; Ridley, 2011; Morris & Turnbull, 2007; Illingworth, 2005).

 

A metric was required for gauging dyslexia-ness by viewing university study attributes and behaviours through the lens of dyslexia, but which was not designed to be a dyslexia screener. It has been stated previously that many students with dyslexia at university may have developed strategies to compensate for literacy-based difficulties experienced in earlier learning histories, partly by virtue of their academic capabilities (see Section 2). Hence in HE contexts, it can be other aspects of the dyslexic self which impact significantly on academic study. Thus, to consider dyslexia to be only a literacy issue, or to focus on cognitive aspects such as working memory and processing speeds, may be erroneous (Cameron, 2015). Procedures which enable effective self-managed learning strategies to be developed need to be considered (Mortimore & Crozier, 2006), especially as self-regulated learning processes are recognized as a significant feature of university learning experiences (Zimmerman & Schunk, 2011; Broadbent & Poon, 2015). Hence these themes were to underpin the Dx Profiler design process.

 

The Dyslexia-ness Continuum

The broad definition of dyslexia outlined by the BDA acknowledges much of this wider discourse about the nature and aetiology of the syndrome, discussed previously (sub-section 2.1). Critically, this definition frames dyslexia as a continuum, which firstly acknowledges that categorical distinctions within the syndrome are problematic; but also suggests that no clear-cut point along this continuum can be universally fixed to indicate the boundary between dyslexic, and non-dyslexic individuals. This is despite the desire to do so, not least to enable decisions to be made concerning the award of financial learning support allowances for students at UK universities.

Adopting the continuum approach aligns neatly with the concept of 'dyslexia-ness', introduced for this current study. Thus, it is reasonable to infer that the characteristics and attributes of dyslexia that are embraced within the (BDA) definition, and which are the components of dyslexia-ness, might be measured in some way once distilled back into dimensions. This opens avenues for exploring dimensions individually, in groups (perhaps combined into factors), or as the complete portfolio of dimensions - that is, dyslexia-ness. According to students' levels of dyslexia-ness, it is possible to locate quasi-dyslexic and non-dyslexic individuals at some point along the continuum relative to their more dyslexic peers, or to sift individuals who share similar levels of dyslexia-ness into sub-groups to enable other comparisons to be made. Hence, the Dyslexia-ness Continuum is established (Fig. #) as a continuous, independent variable against which other study attributes - academic confidence in this current study - can be examined as the corresponding dependent variable. In this way, tentative comparisons might then be made between groups and sub-groups of individuals, in this case students at university, naturally leading to a mechanism for deducing more generalized results.

 

The idea of a dyslexia-ness continuum warrants further development, not least as a mechanism to characterize learning development attributes, skills and behaviours that has meaning and relevance in higher education contexts, especially as a driver for redesigning curriculum delivery to be more flexible, adaptable and inclusive. Whilst this is not to ignore or dismiss the idea of dyslexia per se, such a process might help to relocate it more positively within a multifactorial portfolio of learning and study attributes that could also reduce much of the stigmatization associated with 'difference' in learning contexts (Osterholm, et.al., 2007; Ho, 2004; Riddick, 2000). 

[Move to Section 5?]

 

 

To operationalize the Dyslexia-ness Continuum through the Dx Profiler, so that each participant's Dyslexia Index would generate their continuum locator, these design criteria were established:

  • the profiler was to be a self-report tool requiring no administrative supervision;

  • the profiler was to be ethically non-controversial, not labelled as a dyslexia screener, and with data collected anonymously;

  • the profiler item statements were to be as applicable to non-dyslexic as to dyslexic students;

  • it would include a balance of literacy-related and wider, academic learning-management and study-behaviour evaluators;

  • it would include elements of learning biography;

  • although Likert-style based, scale item statements were to avoid fixed anchor points by presenting respondent selectors as a continuous range option;

  • scale item statements would aim to minimize response distortions potentially induced by negative affectivity bias (Brief, et al., 1988);

  • scale item statements would aim to minimize respondent auto-acquiescence, that is, 'yea-saying', being the often-problematic tendency to respond positively to attitude statements (Paulhus, 1991). Thus, the response indicator design would require a fine gradation of level-judgment to be applied;

  • although not specifically designed into the suite of scale-item statements at the outset - which were presented in a random order - natural groupings of statements as sub-scales were expected to emerge, leading to the possibility for factor analysis to be applied later, if appropriate;

  • scale item statements were to avoid social desirability bias, that is, the tendency of respondents to self-report positively, either deliberately or unconsciously. In particular, an overall neutrality should be established for the complete Dx Profiler so that it would be difficult for participants to guess how to respond to present themselves in a favourable light (Furnham & Henderson, 1982).

 

II. Designing the Dx Profiler

In addition to being grounded in the most recent BDA definition of dyslexia, several other evaluators were consulted for guidance. In particular: the BDA's Adult Checklist developed by Smythe and Everatt (2001); the original Adult Dyslexia Checklist proposed by Vinegrad (1994), upon which many subsequent checklists appear to be based; and the later York Adult Assessment (YAA) (Warmington et al., 2012), which has a specific focus as a screening tool for dyslexia in adults, were all explored. Despite the limitations outlined earlier (sub-section 2.1(VII)), the YAA was found to be usefully informative. Also consulted and adapted were the 'Myself as a Learner Scale' (Burden, 2000); the useful comparison of referral items used in screening tests which formed part of a wider research review of dyslexia by Rice and Brooks (2004); and especially the more recent work by Tamboer and Vorst (2015), where both their self-report inventory of dyslexia for students at university and their useful overview of previous studies were consulted.

 

Drawing from all of these sources, and from supporting literature, a portfolio of 20 statements was devised for gauging attributes of study behaviours and learning biography that are known to present characteristic differences between dyslexic and non-dyslexic students, thus setting out the framework for the Dx Profiler (Table #1).

Figure #: The Dyslexia-ness Continuum - displaying data from this current study


Table #1: Dx Profiler statements, dyslexia attributes, and supporting references

* [...] refers to sub-sections in this thesis where this reference is used to support the discussion point

The Profiler was to be aligned with the BDA (2018) definition of dyslexia, as adopted for this current study, (see Section 2.1(I)), and this definition was distilled into three components: language and literacy skills; thinking and processing skills (encompassing issues related to working/short-term memory, but also to include creative strengths); and organization and time-management competencies. The statements in the Profiler were located across the three components accordingly, setting out a framework that might be validated from dimension reduction analysis of observed data acquired from participants in this study later (see sub-section 4.#), given that this is a newly devised metric. Later analysis would also enable the sensitivity of the Profiler to be assessed by noting how consistently it verified previously identified dyslexic students as dyslexic. Although this might also lead to some confidence about the Profiler's specificity, without applying more conventional dyslexia screeners to non-dyslexic students in the datapool, such assessments could only be tentative.

COMPONENT: Literacy and language

  • accurate and fluent word reading and spelling;

  • phonological awareness;

  • [other] aspects of language (eg: writing coherence);

  • visual processing challenges;

Dimension:

'When I was learning to read at school, I often felt I was slower than others in my class'

'My spelling is generally good'

'In my writing, I frequently use the wrong word for my intended meaning'

'When I'm reading, I sometimes read the same line again or miss out a line altogether'

'I have difficulty putting my writing ideas into a sensible order'

'In my writing at school I often mixed up similar letters, like 'b' and 'd' or 'p' and 'q''

'My tutors often tell me that my essays or assignments are confusing to read'

'I get really anxious if I'm asked to read 'out loud''

COMPONENT: Thinking, processing, memory

  • verbal memory;

  • verbal processing speed;

  • mental calculation;

  • concentration;

  • information synthesis;

  • design, problem-solving ingenuity, creativity;

Dimension:

'I can explain things to people much more easily verbally than in my writing'

'I get in a muddle when I'm searching for learning resources or information'

'I'm hopeless at remembering things like telephone numbers'

'I find following directions to get to places quite straightforward'

'I prefer looking at the 'big picture' rather than focusing on the details'

'My friends say I often think in unusual or creative ways to solve problems'

'I find it really challenging to follow a list of instructions'

'I get my 'lefts' and 'rights' easily mixed up'

COMPONENT: Organization and time management

  • personal organization;

Dimension:

'I find it very challenging to manage my time efficiently'

'I think I am a highly organized learner'

'I generally remember appointments and arrive on time'

'When I'm planning my work, I use diagrams or mindmaps rather than lists or bullet points'

The multifactorial nature of the syndrome implies that attributes are presented in varying degrees in each individual, and that some of the attributes devised are not likely to be uniquely located in any single component. For example, it is reasonable to suppose that the statement 'I get in a muddle when I'm searching for learning resources or information' may be variably influenced by criteria from the skillsets of all three components. How this variability may appear was unknown at the design stage of the Dx Profiler due to the unique, individual distribution of attributes across factors. Nevertheless, a draft of a possible typical, mean-average mapping was constructed as a starting point (Fig #10), to be compared later with the output derived from the dimension reduction analysis of observed data, where attribute-factor overlap would be determined by relative factor loadings (see Fig #, sub-section 4.#).

Fig #10: Dyslexia dimensions distributed across BDA components

III. Validating the Dx Profiler

Before deploying the Dx Profiler as part of the research questionnaire, two further factors were considered pertinent: firstly, it was important to gain tentative confirmation that the statements devised resonated with the learning and study experiences of students at university, and hence were likely to be a realistic attempt to gauge the levels of dyslexia-ness of participants in this current project; and secondly, a reasonable estimate of the prevalence of each dimension was required so that the overall output of the Dx Profiler could be generated from a weighted rather than a simple mean-average of scores obtained from the complete set of 20 dimensions. It was reasonable to suppose that were prevalence data ignored, outputs from the Profiler would be less realistic, if not significantly skewed.

As the Profiler was to be a metric for use in university settings, feedback on both points was sought from dyslexia support professionals operating in that environment, ahead of finalizing the Dx Profiler and incorporating it into the main research questionnaire. It seemed reasonable to assume that these members of university support services staff would have day-to-day interactions with dyslexic students at university, and hence were likely to have a good sense of the prevalence of the dimensions of dyslexia enshrined in the statements. Hence, a small-scale enquiry was devised, being a brief online poll designed, built and hosted on the project's webpages.

Rationale, Methods and Processes

The rationale for the enquiry was threefold:

  • By exploring the prevalence of attributes (dimensions) of dyslexia observed in the field in addition to those distilled through the theory and literature reviewed to that point, it was hoped that the data acquired would confirm that the dimensions being gauged were appropriate and recognizable features of the learning and study profiles of dyslexic students at university;

  • Through analysis of the data collected, value weightings could be ascribed to each dimension based on their reported prevalence. Hence the output of the Dx Profiler in the main research questionnaire could account for the likely relative influence of each dimension by generating a weighted-mean average level of dyslexia-ness for each respondent;

  • Feedback could be sought about the design and operation of the continuous range input sliders (Fig #11) being trialled in this poll, as these were planned to be extensively used in the main questionnaire later. 

The poll contained 18 statements, mirroring those to be used in the Dx Profiler later. The list of statements was prefixed with the question: 'In your interactions with students, to what extent do you encounter each of these dimensions?' Respondents recorded their answer as a percentage where 0% indicated 'never encountered', 50% indicated 'encountered in about half of interactions', and 100% indicated 'all the time' (Fig 11). The default position was set at the midpoint of the slider scale, noting that the default position of input range sliders has been reported to have no significant impact on output (Couper et al., 2006).


Recruitment of Participants

 

Of the 132 UK Higher Education Institutions identified through the Universities UK database, 116 were found to have Student Support Services that included an indicated provision for students with dyslexia, generally as part of more general services for students with disabilities. These were established through inspection of institutions' outward-facing webpages. Most provided a specific e-mail address for contacting the team of dyslexia specialists directly; otherwise, a more general enquiry address for student services was available. All 116 institutions were contacted by e-mail to invite participation in the enquiry, with a link to the poll included in the message. The response rate of 30/116 institutions was disappointing, although it was considered sufficient for meeting the objectives of the poll.

Process

 

An introduction to the poll described its purpose, provided instructions about how to complete it, and explained how to request withdrawal of data (revocation) after submission in the event that a participant had a change of heart about taking part. The relationship of the poll to the current study's main research was also stated, as was an offer to share the findings of the poll provided that a contact e-mail address was supplied.

 

It was expected that respondents would count a multiple-visit student only once in their estimates of dimension prevalence, although this was not made explicit so that the poll preamble remained as brief and uncomplicated as possible. Space was provided near the end for participants to submit any comments about either the enquiry itself or about features of the poll. An invitation was also made to submit information about any additional attributes or characteristics of dyslexia-ness that were regularly encountered.

Submitting the completed poll sent the dataset to the researcher's university e-mail account, where it was downloaded into an Excel spreadsheet for collation and analysis.

Results and Outcomes

 

Data received from the poll submissions were collated, and in the first instance the mean average prevalence for each dimension was calculated, derived from the average frequency (that is, extent) that each dimension was encountered (Table #2).

Fig.#11 Continuous range input slider for Dx Dimension 04

 

Table #2: Prevalence of dyslexia dimensions
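To illustrate how dimension weightings could be derived from the poll returns, a minimal sketch follows. In the study itself the collation was carried out in Excel; the poll matrix below is randomly generated placeholder data, not the observed returns:

```python
import numpy as np
import pandas as pd

# Hypothetical poll returns: each row is one practitioner's prevalence
# estimates (0-100%) for the 18 dimension statements.
rng = np.random.default_rng(7)
poll = pd.DataFrame(
    rng.uniform(20, 95, size=(30, 18)),
    columns=[f"Dim{i:02d}" for i in range(1, 19)],
)

# Mean prevalence per dimension, re-expressed as a 0-1 weighting for the
# weighted-mean Dyslexia Index calculation described below.
weights = (poll.mean() / 100).round(2)
print(weights.head())
```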

Participants reported 24 additional attributes encountered in their work with dyslexic students, and in most cases a % prevalence was also included:

  • poor confidence in performing routine tasks [reported by 4 respondents with prevalence respectively: 90%; 85%; 80%; % not reported (n/r)]

  • slow reading [100%; 80%; n/r]

  • low self-esteem [85%; 45%]

  • anxiety related to academic achievement [80%; 60%]

  • pronunciation difficulties / pronunciation of unfamiliar vocabulary [75%; 70%]

  • finding the correct word when speaking [75%; 50%]

  • difficulties taking notes and absorbing information simultaneously [75%; n/r]

  • getting ideas from 'in my head' to 'on the paper' [60%; n/r]

  • trouble concentrating when listening [80%]

  • difficulties proof-reading [80%]

  • difficulties ordering thoughts [75%]

  • difficulties remembering what they wanted to say [75%]

  • poor grasp of a range of academic skills [75%]

  • not being able to keep up with note-taking [75%]

  • getting lost in lectures [75%]

  • remembering what's been read [70%]

  • difficulties choosing the correct word from a spellchecker [60%]

  • meeting deadlines [60%]

  • focusing on detail before looking at the 'big picture' [60%]

  • difficulties writing a sentence that makes sense [50%]

  • handwriting legibility [50%]

  • being highly organized in deference to 'getting things done' [25%]

  • having to re-read several times to understand meaning [n/r]

  • profound lack of awareness of their own academic difficulties [n/r]

 

The additional attribute reported by the most respondents (four) related to confidence, with slow reading being reported by three respondents. Most other additional attributes were reported by only one respondent.

IV. Discussion

Although the response rate for this small-scale poll was disappointing (30 respondents out of 116 invitations to participate), it was considered that the data collected were sufficient to confirm that appropriate attributes of dyslexia had been selected to resonate with the typical field experiences of dyslexia support professionals, and hence were reasonably representative of the profiles of dyslexic students at UK universities. Although 24 attributes additional to the 18 provided in the poll were reported, most with a corresponding level of prevalence, the majority of these were reported by only one respondent each, and hence were not considered indicative of a significant omission in the poll design. The additional attribute related to confidence was considered to be accounted for in the Academic Behavioural Confidence Scale, itself forming a major section of the main research questionnaire.

Hence, the 18 dyslexia dimensions were considered to have been validated to a sufficient degree by the outcomes of the poll to form the basis of the Dyslexia Index Profiler. In the first instance, these dimensions were formatted to be more concise; converted into the first person so that participants would feel engaged with the research; and re-phrased where necessary so that the Profiler would be relevant to all students. Secondly, the two additional dimensions relating to learning biography were now included (concerning letter reversal and slow uptake in learning to read). These did not form part of the validation poll as it was assumed that their context would be outside the frame of experience of the dyslexia tutors consulted.

 

Table #3 shows the final iteration of the complete set of 20 dimensions that formed the Dx Profiler, with weightings derived directly from the prevalence of dimensions established from the poll. The two additional dimensions were both assigned weightings of 0.61, this being the mean average weighting of the other 18 dimensions; this was considered reasonable given that no studies were found offering evidence of the prevalence of these dimensions in adults with dyslexia. The statements were ordered randomly to reduce the likelihood of order-effect bias. This is an error attributable to the sequence of questions or statements in a survey inducing a question-priming effect, such that a response provided for one statement subsequently influences the response to the following one when both appear to be gauging the same or a similar aspect of the construct under scrutiny (McFarland, 1981).

 

Table #3: Dyslexia-ness dimensions statements and weighting

Generating the Dyslexia Index

Reverse coding

The objective of the Profiler was to generate a numerical output for every student participant - their Dyslexia Index (Dx) - and it was considered appropriate to aggregate the input-values of the Profiler in such a way that a high final Dx value indicated a high level of dyslexia-ness. However, as the Dx Profiler was designed to include a balance of positively and negatively phrased statements (see sub-section 3.2(II)), if dimension-statement values were aggregated without taking account of whether a high or a low value for any particular statement was a marker of a high level of dyslexia-ness, the Dyslexia Index value would be compromised. For example, for Dimension #2, 'My spelling is generally good', it is reasonable to expect that a strongly dyslexic participant would be likely to disagree with this statement, and hence record a low value for this dimension; whereas for Dimension #1, relating to slow uptake of early years basic reading skills, the same respondent would be likely to record a high value, indicating strong agreement with the statement. Hence the value outputs for some statements needed to be reverse-coded to ensure that high values on all statements indicated high levels of dyslexia-ness.
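A minimal sketch of the reverse-coding step follows, assuming the 0-100 response range described above; the set of reversed dimensions reflects those named in the text (#2, with #5 and #7 identified later by the reliability analysis):

```python
# Dimensions where agreement marks LOW dyslexia-ness are reverse-coded so
# that a high value on every item indicates high dyslexia-ness. Item
# values run 0-100, so reversal is simply 100 - value.
REVERSE_CODED = {2, 5, 7}

def recode(dimension: int, value: float) -> float:
    """Return the analysis-ready value for one dimension response."""
    return 100 - value if dimension in REVERSE_CODED else value

# Strong disagreement with 'My spelling is generally good' (Dimension #2)
# scores high on dyslexia-ness after recoding:
assert recode(2, 15) == 85
```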

Identifying the other dimensions requiring reverse-coding could only be achieved after data had been collected from participants. Several methods were trialled; after several iterations, the most likely outcomes were established by running a reliability analysis of the complete scale to generate Cronbach's ɑ reliability coefficients. When these outputs were integrated into the dimension reduction techniques later in the data analysis, it was possible to verify that Dimensions #5 and #7 also required reverse-coding. The reliability analysis also identified some dimensions that may be redundant, leading to a reduced scale of 16 dimensions, which is reported below (sub-section 4.#). This aspect of the Dx Profiler requires developmental work, which may form the topic for a later project. However, in this current study, given the caveats mentioned, the process was considered robust enough to enable the outputs from the Profiler to be used.
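For clarity, the coefficient underlying this reliability analysis is sketched below; the study computed it within its statistics software, so this Python version, run here on placeholder data, is illustrative only:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical check on a 20-item response matrix:
rng = np.random.default_rng(0)
data = rng.uniform(0, 100, size=(98, 20))
print(round(cronbach_alpha(data), 3))
```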

Calculating Dyslexia Index (Dx)

Table #5 demonstrates the weighted mean calculation of the Dyslexia Index (Dx) using the raw scores (observed values) obtained from a randomly chosen research participant as an example. This student was a female, home-domiciled undergraduate who had declared dyslexia. The Dx output was scaled to a value between 0 and 1000 to distinguish it more easily from the participant's ABC value, derived directly from the unscaled, unweighted mean average of their responses to the 24 statements of the ABC Scale, each gauged in the range 0 to 100.
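A minimal sketch of this calculation follows, assuming the scaling is a simple factor of 10 from the 0-100 response range to the 0-1000 Dx range; all score and weighting values below are hypothetical, not the participant's data from Table #5:

```python
import numpy as np

# Hypothetical recoded item scores (0-100) and prevalence-derived
# weightings for the 20 dimensions of the Dx Profiler.
scores = np.array([72, 15, 64, 81, 40, 58, 77, 23, 66, 49,
                   88, 35, 70, 52, 61, 44, 79, 30, 67, 55], dtype=float)
weights = np.array([0.72, 0.61, 0.55, 0.80, 0.47, 0.68, 0.74, 0.50, 0.63,
                    0.58, 0.77, 0.45, 0.70, 0.53, 0.61, 0.49, 0.75, 0.42,
                    0.61, 0.61])

# Weighted mean of the 20 items, scaled into the 0-1000 Dx range.
dx = 10 * np.average(scores, weights=weights)
print(round(dx, 1))
```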

V. Concluding summary

 

In summary, the Dx Profiler calculated a Dyslexia Index for each respondent in the research datapool, being a weighted mean average of responses to 20 Likert-style item statements, where each aimed to capture data relating to a specific study attribute or behaviour, or an aspect of learning biography. Respondents recorded their strength of agreement with each statement along a continuous range from 0% to 100%. Weightings were derived from the prevalence of characteristics determined through a poll of dyslexia support practitioners. The weighted mean was scaled to provide an output, Dyslexia Index (Dx), in the range 0 < Dx < 1000. With data available following deployment of the main research questionnaire, dimensionality reduction was applied (PCA) to explore the factor structure of the Dx Profiler. This was firstly to compare the output with the speculated structure based on the BDA definition of dyslexia, and secondly to determine whether a useful cross-factorial analysis might be conducted with outputs from the ABC Scale. The aim was to explore more thoroughly the associations revealed between academic confidence and dyslexia-ness (reported in sub-section 4.#). This analysis remains tentative and to an extent, speculative, because the size of the sample (n=98) from which it was generated is quite small. A later study could aim to develop the Dx Profiler by collecting data from larger and more varied samples, hence enabling PCA to be more confidently applied.

The outcome of the development process was that the Dx Profiler was considered to have met its design specifications and was used confidently to gauge the dyslexia-ness of the participants in the study.

 

 

3. Qualitative Data

The final part of the questionnaire collected qualitative data in an optional, unlimited free-writing area. Participants were invited to comment on any aspects of the research, the questionnaire, or their learning experiences at university more generally. The inclusion of this final section was based on the usefulness of the rich and varied data acquired in a similar way in the questionnaire used in the earlier MSc dissertation, where it became evident that providing a conduit for students with dyslexia to comment and give feedback on how they felt about their study at university was heartily welcomed. The data captured were used to elaborate the discussion element of the dissertation. Hence it was considered that adopting a similar approach in this current study would be of value.

 

4. Questionnaire pilot

The questionnaire was trialled amongst a small group of students (n=10) local to the researcher. The aims were to gain feedback about its style of presentation, ease of use, the clarity of the questions and statements, the quality of the introduction, and the length of time it took to complete; to identify any issues in the way it displayed in the web-browser used; and to elicit any other comments that might indicate that a review or partial review would be necessary before deployment to the target audience. The outcome of this pilot indicated that, other than some minor wording changes, no amendments were required.

 
 
 

3.4 Data processing summary

Questionnaire responses were received by e-mail and identified from the data as having been submitted either by a student with declared dyslexia or by a student with no declared dyslexia. Subsequently, raw data were transferred into Excel for initial inspection. A Dyslexia Index was calculated for each respondent using the weighted mean average process applied to the 20 scale-items, developed at the design stage of the Dx Profiler in the light of the analysis of the data captured from the validation poll (see sub-section 3.3.III(2) above). Students from the non-dyslexic group whose Dyslexia Index exceeded Dx = 592.5 were categorized as quasi-dyslexic (see sub-section 4.## for the discriminating rationale). Each respondent's ABC score was calculated using a non-weighted mean average of the 24 scale-item responses, each offering a range from 0 to 100, leading to an output of 0 < ABC < 100 (Fig 9).
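To summarize the group-allocation logic computationally, a minimal sketch follows; in the study this processing was performed in Excel and SPSS, and the respondent rows below are hypothetical:

```python
import pandas as pd

# Hypothetical collated datapool: one row per respondent.
datapool = pd.DataFrame({
    "declared_dyslexia": [True, False, False],
    "Dx": [712.4, 603.1, 410.8],
})

DX_CUTOFF = 592.5  # discriminator reported in the text

def assign_group(row: pd.Series) -> str:
    if row["declared_dyslexia"]:
        return "DI"    # students with known dyslexia
    if row["Dx"] > DX_CUTOFF:
        return "DNI"   # quasi-dyslexic subgroup of the non-dyslexic group
    return "ND"        # students with no known dyslexia

datapool["group"] = datapool.apply(assign_group, axis=1)
print(datapool)
```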

The complete datapool was transferred into SPSS v24 (IBM Corp, 2016) for further analysis.


Figure 9:   Data processing flowchart.
