Academic confidence and dyslexia at university


 

3.1  Research Design

 

Overview

This section describes the blueprint for the strategic and practical processes of the project. These were informed firstly by the previous Master's dissertation, secondly by the relevant literature, and lastly by a strong motivation, as a learning practitioner in HE, to explore how dyslexic students perceive the impact of their dyslexia on their academic confidence.

 

A special focus was to consider how an individual's knowledge and awareness of their dyslexia may be a significant factor affecting learning efficacy, not least because it may compound the challenges and learning issues attributable to their dyslexia in learning environments that remain steadfastly literacy-based. The research design has attempted to address this by comparing levels of academic confidence between dyslexic students and their quasi-dyslexic peers. In addition to conducting a study in this under-researched area, the rationale has been that the outcomes of the research might contribute to the discourse about how knowledge acquisition, development and creation processes can be transformed in HE in ways that more significantly adopt principles of equity, social justice and universal design (Lancaster, 2008; Passman & Green, 2009; Edyburn, 2010; Cavanagh, 2013). It is argued that adopting this approach could lead to a comprehensive learning solution that may mitigate or even nullify the impact of dyslexic learning differences at university. This approach is supported by the argument that dyslexia may now be best considered as an alternative form of information processing (Tamboer et al., 2014) rather than a learning disability (e.g.: Heinemann et al., 2017; Joseph et al., 2016, amongst numerous other studies).

 

Descriptions are provided of how practical processes have been designed and developed to enable appropriate data sources to be identified, and of how data have been collected, collated and analysed so that the research questions can be properly addressed. The rationales for research design decisions are set out and justified, and where the direction of the project has diverged from the initial aims and objectives, the reasons for these changes are described, including the reflective processes that have underpinned project decision-making and re-evaluation of the focus of the enquiry where this has occurred.

I       Design focus – the methodology

This research project has taken an explorative design focus because little is known about the interrelationships between the key parameters being investigated, so no earlier model has been available to provide guidance. The main emphasis has been to devise research processes able to establish empirical evidence for previously anecdotally observed features of study behaviour and attitudes to learning amongst the dyslexic student community at university.

The fundamental objective has been to establish a sizeable research datapool comprising two principal groups: the first was to be as good a cross-section of HE students as may be returned through voluntary participation in the project; the second was to be a group of students known to have dyslexic learning differences. Participants with no formal identification of dyslexia were recruited by means of advertisements posted on Middlesex University’s student-facing webpages during the academic year 2015-16; students who had been formally identified with dyslexia were recruited through the University’s Dyslexia and Disability Service student e-mail distribution list. The research aim was twofold: firstly, to acquire a sense of all research participants' academic confidence in relation to their studies at university; secondly, to establish the extent of all participants' 'dyslexia-ness'.

 

This has been a key aspect of the project design because from this, it was planned that students with dyslexia-like profiles - marked by their high levels of dyslexia-ness - might be identified from within the research subgroup of supposedly non-dyslexic students. Quantitative analysis of the metrics used to gauge these criteria has addressed the primary research questions, which hypothesize that knowing about one's dyslexia may have a stronger negative impact on academic confidence than not knowing that one may have learning differences typically associated with dyslexia. Were this to be established, it would suggest that labelling a learner as dyslexic may be detrimental to their academic confidence in their studies at university, or at best, may not be as useful and reliable as previously believed (Elliott & Grigorenko, 2014).

 

The research design devised an innovative process for collecting data by utilizing recently developed, enhanced electronic (online) form design processes (described below). The research participants were all students at university and no selective or stratified sampling protocols were used in relation to gender, academic study level or study status - that is, whether an individual was a home or overseas student - although all three of these parameters were recorded for each respondent and these data have been used throughout the analysis and discussion when considered apposite. For students recruited into the dyslexic students group, information was also collected about how they learned of their dyslexia because it was felt that this may be pertinent to the discussion relating to the effects of stigmatization on academic study.

 

The research design adopted a mixed methods approach although the main focus has been on the quantitative analysis of data collected by means of a self-report questionnaire. The questionnaire was developed for electronic deployment through the project's webpages, and it adopted a continuous response scale approach uniquely developed for this project by taking advantage of new online form processes now available for incorporation into web-browser page design in preference to traditional, fixed anchor point scale items. The rationale for adopting a continuous scale approach was to try to mitigate the typical difficulties associated with anchor-point scales where the application of parametric statistical processes to non-parametric data is of questionable validity (Ladd, 2009; Carifio & Perla, 2007). In addition to recording value scores, research participants were also encouraged to provide qualitative data which were collected through a 'free-text' writing area in the questionnaire. The aim has been to use these data to add depth of meaning to the hard outcomes of statistical analysis where this has been considered helpful and appropriate.

 

This method of data collection was chosen for several reasons: firstly, because self-report questionnaires have been shown to provide reliable data in dyslexia research (e.g.: Tamboer et al., 2014; Snowling et al., 2012); secondly, because it was important to recruit participants widely from the student community of the researcher's home university and, if possible, from other HE institutions (although only two others responded to the invitation to participate); thirdly, because it was felt that participants were more likely to provide honest responses in the questionnaire if they were able to complete it privately, hence avoiding any issues of direct researcher-involvement bias; and lastly, because the remoteness of the researcher from the home university would have presented significant practical challenges had a more face-to-face data collection process been employed.

So as to encourage a good completion rate, the questionnaire was designed to be as simple as possible whilst at the same time eliciting data covering three broad areas of interest. Firstly, demographic profiles were established through a short, introductory section that collected personal data such as gender, level of study, and particularly whether or not the participant experienced any specific learning challenges. The second section presented Sander and Sanders’ (2006, 2009) ABC Scale verbatim. This was used to gauge participants' academic confidence, as employed in other studies with university students (e.g.: Sander et al., 2011; Nicholson et al., 2013; Hlalele & Alexander, 2011). Lastly, a detailed profile of each respondent's study behaviour and attitudes to their learning was collected, and this section formed the bulk of the questionnaire. The major sub-section of this has been my approach to gauging the 'dyslexia-ness' of participants. Care was taken throughout to avoid using value-laden, judgmental phraseology such as 'the severity of dyslexia' or 'diagnosing dyslexia', not least because the stance of the project has been to present dyslexia, such that it might be defined in the context of university study, as an alternative knowledge acquisition and information processing capability, where students presenting dyslexia and dyslexia-like study profiles might be positively viewed as being neurodiverse rather than learning disabled.

 
 

II      Metrics

Academic confidence was assessed using the existing ABC Scale, firstly because, as has been discussed, there is an increasing body of research that has found this to be a good evaluator of academic confidence as presented in university-level students' study behaviours. Secondly, no other metrics have been found that explicitly focus on gauging confidence in academic settings (Boyle et al., 2015), although there are evaluators that measure self-efficacy and more particularly academic self-efficacy, which, as also described in Section 2, is considered to be the umbrella construct that includes academic confidence. Hence, the ABC Scale is well-matched to the research objectives of this project.

Dyslexia-ness has been gauged using a profiler designed and developed for this project as a dyslexia discriminator that could identify, with a sufficient degree of construct reliability, students with apparently dyslexia-like profiles from the non-dyslexic group. It is this group of students that is of particular interest in the study because data collected from these participants are to be compared with groups of students with identified dyslexia and those with no indication of dyslexia. For the purposes of this enquiry, the output from the metric has been labelled as Dyslexia Index (Dx), although it is acknowledged that the term may be seen as contradictory to the stance that underpins the whole study. However, Dyslexia Index at least enables a narrative to be constructed that would otherwise be overladen with repeated definitions of the construct and process that has been developed.

Designing a mechanism to identify this third group of quasi-dyslexic students has been one of the most challenging aspects of the project. It was considered important to develop an independent means for quantifying dyslexia-ness in the context of this study in preference to incorporating existing dyslexia 'diagnosis' tools, for two reasons: firstly, existing metrics for identifying dyslexia in adults would have been difficult to use without explicitly disclosing to participants that part of the project's questionnaire was a ‘test’ for dyslexia, and to do this covertly would be unethical and therefore unacceptable as a research process. Secondly, it has been important to use a scale which encompasses a broader range of study attributes than those specifically and apparently affected by literacy challenges; this is not least because it has been shown that many students with dyslexia at university, partly by virtue of their higher academic capabilities, may have developed strategies to compensate for literacy-based difficulties that they may have experienced earlier in their learning histories.

 

The evidence for this has been presented earlier (Section 2). A broader scale is also needed because, in HE contexts, research has revealed that other aspects of the dyslexic self can impact significantly on academic study and that it may be a mistake to consider dyslexia to be only a literacy issue or to focus on cognitive aspects such as working memory and processing speeds (Cameron, 2015). In particular, those processes which enable effective self-managed learning strategies to be developed need to be considered (Mortimore & Crozier, 2006), especially as these are recognized as a significant feature of university learning, despite some research indicating at best marginal, if not dubious, benefits of self-regulated learning processes when compared with traditional learning-and-teaching structures (Wilson & Lizzio, 2006). Following an inspection of the few existing dyslexia diagnosis tools considered applicable for use with university-level learners, it was concluded that these were flawed for various reasons (discussed in Section 2) and unsuitable for use in the current project. Hence, the Dyslexia Index Profiler has been developed.

It is important to emphasize that the purpose of the Dx Profiler is not to identify dyslexia in students explicitly, although a subsequent project might explore the feasibility of developing the profiler as such. The purpose of the Profiler has been to find students who present dyslexia-like study profiles such that these students' academic confidence could be compared with that of students who have disclosed identified dyslexia. In this way it might be possible to address the key research question relating to whether levels of academic confidence are related to an individual being aware of their dyslexia. From this, conjecture about how levels of academic confidence may be influenced by the dyslexia label may be possible.

 

Development of the Dx Profiler has been a complex process that built on two small pilot studies in addition to pertinent theory about the nature of dyslexia. Summary details about how the Dx Profiler has been used in this study are presented below (in sub-section 3.1(III)), with a more detailed report on its development provided in the appendices (Appendix 8.1). The decision to report on the Dx Profiler in this way has been taken because this thesis is about how academic confidence is affected by dyslexia-ness, not about the development of the mechanism to evaluate dyslexia-ness.

 

III     Gauging Dyslexia-ness

An overview of the Dyslexia Index Profiler

 

The Dyslexia Index (Dx) Profiler has been developed to meet the following criteria:

  • it is a self-report tool requiring no administrative supervision;

  • it includes a balance of literacy-related and wider, academic learning-management evaluators;

  • it includes elements of learning biography;

  • self-report stem item statements are as applicable to non-dyslexic as to dyslexic students;

  • although Likert-style based, stem item statements avoid fixed anchor points by presenting respondent selectors as a continuous range option;

  • stem item statements are written so as to minimize response distortions potentially induced by negative affectivity bias (Brief et al., 1988);

  • stem item statements are also written to minimize respondent auto-acquiescence ('yea-saying'), which is the often-problematic tendency to respond positively to attitude statements (Paulhus, 1991); this is supported by the response indicator design requiring a fine gradation of level-judgment to be made;

  • although sub-scale groupings were not specifically designed into the suite of stem-item statements at the outset, which are presented in a random order, natural groupings of statements are expected to emerge through factor analysis as sub-scales;

  • stem item statements must demonstrate a fair attempt to avoid social desirability bias, that is, the tendency of respondents to self-report positively, either deliberately or unconsciously. In particular, an overall neutrality should be established for the complete Dx Profiler so that it would be difficult for participants to guess what are likely to be responses that would present them in a favourable light (Furnham & Henderson, 1982).


The Dx Profiler has been constructed following a review of dyslexia self-identifying evaluators: in particular, the BDA's Adult Checklist developed by Smythe and Everatt (2001); the original Adult Dyslexia Checklist proposed by Vinegrad (1994), upon which many subsequent checklists appear to be based; and the much later York Adult Assessment (Warmington et al., 2012), which has a specific focus as a screening tool for dyslexia in adults and which, despite the limitations outlined earlier (sub-section 2.1(VII)), was found to be usefully informative. Also consulted and adapted have been the 'Myself as a Learner Scale' (Burden, 2000), the useful comparison of referral items used in screening tests which formed part of a wider research review of dyslexia by Rice and Brooks (2004), and more recent work by Tamboer and Vorst (2015), drawing on both their own self-report inventory of dyslexia for students at university and their useful overview of other previous studies.

The Dx Profiler collected quantitative data from all participants. For students who disclosed their dyslexia, this enabled a Control subgroup of students to be identified. Scores from participants who declared no dyslexic learning differences could be compared, and from these, two further subgroups were established: firstly, a subgroup of non-dyslexic students (the Base subgroup) whose scores were substantially below those of students in the dyslexic subgroup – i.e. they presented low levels of dyslexia-ness. Secondly, a subgroup of apparently non-dyslexic students (the Test subgroup) whose dyslexia-ness levels were similar to those in the dyslexic (Control) subgroup – i.e. they presented a medium to high level of dyslexia-ness (or quasi-dyslexia). Thus, the academic confidence of the three subgroups could be compared.
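To illustrate the subgroup-sifting logic just described, a minimal sketch follows (in Python); the Dx cut-off values and respondent data shown are placeholders introduced purely for illustration, not the boundary values applied in the study.

```python
# Minimal sketch of the subgroup-sifting logic described above.
# The Dx cut-off values (40 and 60 on the 0-100 scale) are placeholders only,
# not the boundary values used in the study.

def assign_subgroup(declared_dyslexia: bool, dx: float,
                    low_cut: float = 40.0, high_cut: float = 60.0) -> str:
    """Return the research subgroup for one respondent."""
    if declared_dyslexia:
        return "Control"      # self-declared dyslexic students
    if dx <= low_cut:
        return "Base"         # non-dyslexic students with low dyslexia-ness
    if dx >= high_cut:
        return "Test"         # quasi-dyslexic: no declaration, but high dyslexia-ness
    return "Unallocated"      # mid-range respondents outside the three subgroups

# Illustrative respondents (hypothetical data):
for ident, declared, dx_value in [("A01", True, 72.5), ("A02", False, 18.0), ("A03", False, 66.3)]:
    print(ident, assign_subgroup(declared, dx_value))
```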

Although extensive work was completed to develop the Dx Profiler, accomplishing this was not the primary aim of this study but rather an element of the research design. In the interests of expediency in this section, a more detailed account of the Dx Profiler development is therefore presented in Appendix 8.1, not least to avoid an over-emphasis on the Dx Profiler at the expense of the ABC Scale. In summary, the final iteration of the Dx Profiler comprised 20 Likert-style item statements, each aiming to capture data relating to a specific study attribute or aspect of learning biography. Participants recorded their strength of agreement with each statement along a continuous range from 0 to 100%. The dimensions as they appeared in the questionnaire are shown in Table 2.

 
 

Table 2:       Dx Profiler dimension statements

At the design stage, item statements were referred to as 'dimensions' and were loosely grouped into scales, each designed to measure distinct study and learning management processes. At the outset, these were grouped intuitively into five categories or subscales: Reading; Scoping, Thinking and Research; Organization and Time-management; Communicating Knowledge and Expressing Ideas; Memory and Information Processing. With results available to inspect post hoc, principal component analysis (PCA) was applied as a dimensionality-reduction step to re-determine these scales, and new dimension groupings emerged (an illustrative sketch of this step follows the factor listing below). These were referred to as Dx Factors, each comprising their respective suites of dimensions as determined through the PCA process and renamed accordingly:

  • Factor 1: Reading, writing and spelling;

    • Dimension 1:   When I was learning to read at school, I often felt I was slower than others in my class.

    • Dimension 2:   My spelling is generally good.

    • Dimension 6:   In my writing, I frequently use the wrong word for my intended meaning.

    • Dimension 8:   When I’m reading, I sometimes read the same line again, or miss out a line altogether.

    • Dimension 9:    I have difficulty putting my writing ideas into a sensible order.

    • Dimension 20:  I get really anxious if I’m asked to read ‘out loud’.

  • Factor 2: Thinking and processing;

    • Dimension 10:  In my writing at school, I often mixed up similar letters like ‘b’ and ‘d’, and ‘p’ and ‘q’.

    • Dimension 11:  When I’m planning my work, I use diagrams or mindmaps rather than lists or bullet points.

    • Dimension 15:  My friends say I often think in unusual or creative ways to solve problems.

    • Dimension 16:  I find it really challenging to make sense of a list of instructions.

    • Dimension 17:  I get my ‘lefts’ and ‘rights’ easily mixed up.

    • Dimension 18:  My tutors often tell me that my essays or assignments are confusing to read.

    • Dimension 19:  I get in a muddle when I’m searching for learning resources or information.

  • Factor 3: Organization and time-management;

    • Dimension 3:   I find it very challenging to manage my time efficiently.

    • Dimension 5:   I think I’m a highly organized learner.

    • Dimension 7:   I generally remember appointments and arrive on time.

  • Factor 4: Verbalizing and scoping;

    • Dimension 4:   I can explain things to people much more easily verbally than in my writing.

    • Dimension 14: I prefer looking at the ‘big picture’ rather than focusing on the details.

  • Factor 5: Working memory;

    • Dimension 12: I am hopeless at remembering things like telephone numbers.

    • Dimension 13: I find following directions to get to places quite straightforward.

 

Through this extensive development and design process, the Dx Profiler met its design specifications and was used confidently to gauge the dyslexia-ness of the participants in the study.
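As an indication of how the post hoc dimensionality-reduction step referred to above might be reproduced, the following sketch applies scikit-learn's PCA to a hypothetical pandas DataFrame `dx_items` holding the twenty 0-100 slider responses (one column per dimension); the settings shown are illustrative only and do not reproduce the analysis reported in Appendix 8.1.

```python
# Illustrative sketch of the post hoc dimensionality reduction of the 20 Dx dimensions.
# Assumes `dx_items` is a pandas DataFrame with one row per respondent and
# twenty columns ('dim01' ... 'dim20') of 0-100 slider responses.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def extract_dx_factors(dx_items: pd.DataFrame, n_factors: int = 5) -> pd.DataFrame:
    """Return the component loadings of each dimension on the requested factors."""
    scaled = StandardScaler().fit_transform(dx_items)   # standardize each dimension
    pca = PCA(n_components=n_factors)
    pca.fit(scaled)
    return pd.DataFrame(
        pca.components_.T,
        index=dx_items.columns,
        columns=[f"Factor {i + 1}" for i in range(n_factors)],
    )

# Each dimension would then be allocated to the factor on which it loads most
# strongly, giving groupings analogous to the five Dx Factors listed above.
```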

 

IV    Gauging academic confidence

The Academic Behavioural Confidence Scale

 

The ABC Scale developed by Sander and Sanders (2006, 2009) has generated a small but focused following amongst researchers who are interested in exploring differences in university student study behaviours and academic learning management approaches (see sub-section 2.2(IV)). However, it is pertinent to provide a brief, summary overview here of the position of the ABC Scale in this project.

There appear to be no peer-reviewed publications which explicitly explore the impact of dyslexia on academic confidence as defined through the rationales which underpin the ABC Scale - that is, in relation to self-regulated learning, typified by academic learning management skills presented by study behaviours of students at university. The only previous studies found have been two unpublished undergraduate dissertations. Of these, Barrett's (2005) study is not available to consult due to access restrictions at the home university, and in spite of the title, 'Dyslexia and confidence in university students', it is not known whether academic confidence, or confidence more generally, was the focus of the research. However, it is possible to know a little more through the Sanders et al. (2009) reference to that dissertation in their paper exploring gender differences in the academic confidence of university undergraduates, in which Barrett's study is cited as providing evidence that dyslexia impacts on academic confidence.

 

Asquith's later (2008) project, however, is available to consult. That study used the ABC Scale to compare levels of academic confidence between dyslexic students at university who were in receipt of learning support and students not receiving support, both dyslexic and non-dyslexic. The study also explored levels of self-esteem and showed that students with dyslexia present significantly lower levels of both academic confidence and self-esteem than their non-dyslexic peers, but that dyslexic students receiving learning support had elevated levels of both academic confidence and self-esteem in comparison to dyslexic peers who were not receiving support. The 24-item ABC Scale was used to measure academic confidence, Vinegrad's Adult Dyslexia Checklist gauged students' likelihood of presenting dyslexia, and students' self-esteem was evaluated through the Rosenberg Self-Esteem Scale (Rosenberg, 1979). What is not clear is how Asquith dealt with the ethical issues of using a widely available, proprietary screener for dyslexia as the means to identify students with dyslexia who were not taking up learning support, hence assuming that these students were unidentified dyslexics. There is no mention of whether these participants were later informed that the dyslexia screener had identified the strong likelihood that they may be dyslexic.

 

Hence, with no other studies found, it is assumed that this current project has found a gap in the research. It has avoided the ethical difficulties around covertly screening for dyslexia through use of an existing and clearly attributable dyslexia screening tool by specifically developing a profiling instrument which draws on differences in academic learning management attributes as the discriminator. By defining the outcome of the profiler as indicating a level of dyslexia-ness, it has been possible to identify students who may be presenting apparent dyslexia but who are otherwise not formally identified as dyslexic, without the use of a formal dyslexia screening tool. Furthermore, the research methods devised in this study are able to demonstrate that apparently non-dyslexic students with levels of dyslexia-ness that are nevertheless in line with those of formally identified dyslexic students present higher levels of academic confidence than their dyslexic peers.

 

Hence, because much of the research evidence presented in the literature review (Section 2) supports indications that academic confidence impacts on academic achievement, it may be possible to suggest that formally identifying dyslexia in students at university may not be as beneficial as previously assumed. This may be especially significant since the typical learning development opportunities most usually afforded to dyslexic students are becoming more widely available to all students in university communities.

V     Collecting information

The rationales, justifications and challenges

 

As this project is focused on finding out more about the academic confidence of university students and relating this to levels of dyslexia-ness, the data collection objectives were to:

  • design and build a data collection instrument that could expediently and unobtrusively gather information about academic confidence and aspects of dyslexia-ness in information formats that could easily be collated and statistically analysed once acquired from a range of university students;

  • ensure that the data collection instrument was as clear, accessible and easy-to-use as possible noting that many respondents would be dyslexic;

  • ensure that the data collection instrument was able to acquire information quickly (15 minutes was considered as the target) to maintain research participant interest and attention;

  • ensure compliance with all ethical and other research protocols and conventions for data collection according to guidelines and regulations specified by the researcher's home university;

  • design an instrument that could be administered online for participants to engage with it at their convenience;

  • enable participants to feel part of a research project rather than its subjects, and hence encourage them to engage with it and provide honest responses;

  • maximize response rates and minimize selection bias for the target audience.

 

These objectives were met by designing and building a self-report questionnaire that would be fit-for-purpose in this project. Carefully constructed survey questionnaires are widely used to collect data on individuals' feelings and attitudes that can be easily numericized to enable statistical analysis (Rattray & Jones, 2007), and are one of the most commonly used processes for collecting information in educational contexts (Colosi, 2006). This data collection rationale falls within the scope of survey research methodology, in which asking participants questions about the issues being explored is a practical and expedient process of data collection, especially where more controlled experimental processes, such as might be conducted in a laboratory, or other methods of observing behaviour, are not feasible (Loftus et al., 1985).

 

Developments in web-browser technologies and electronic survey creation techniques have led to the widespread adoption of questionnaires that can be delivered electronically across the internet (Ritter & Sue, 2007). Given my expertise in web-authoring technologies, and that one of the aims of the project has been to publish it online through a suite of webpages that have grown and developed dynamically to match the progress of the project, the obvious data collection solution was to build an online questionnaire that could be hosted on the project webpages. Some elements of online data collection processes remain out of the control of the researcher: for example, in comparison to face-to-face interviews, a self-report questionnaire provides no latitude for a responsive, interactional relationship to emerge between researcher and participant, which might be useful and appropriate in some circumstances where depth, shades and tones of answers can generate rich, additional data. However, the ability to reach a complete university community of potential participants through the precise placement and marketing of a web-based questionnaire was felt to have significant benefits.

 

These include:

  • the ability for the researcher to remain detached from the data collection process to reduce any researcher-induced bias;

  • the ability for respondents to complete the questionnaire privately, at their own convenience and without interruption, which it was hoped would lead to responses that were honest and accurate;

  • ease of placement and reach, achieved through the deployment of a weblink to the questionnaire on the home university's website;

  • ease of data receipt: the standard 'submit' button design feature of online surveys generated a dataset of each participant's questionnaire responses in tabular form, which was automatically sent by e-mail to the researcher's university mail account;

  • the facility for strict confidentiality protocols to be applied, whereby a participant's data, once submitted, were anonymous and not attributable to the participant by any means. This was achieved through development of an innovative response form coding process which was built into the 'submit' feature of the questionnaire form but which still allowed for a participant dataset to be removed from the datapool should a participant request this post-submission;

  • the ability to ensure that participant consent had been obtained by linking agreement to this to access to the questionnaire.

 

Substantial technical challenges in the design of the electronic questionnaire were encountered and an account of how these were managed is provided below in sub-section 3.2(II).

 

Data analysis employed quantitative statistical processes to sort the datapool according to Dyslexia Index (Dx) value criteria, as this metric was the independent variable, and addressed the research hypotheses using data obtained through the ABC Scale. Quantitative data were collected through Likert-style statements which collectively formed scales and subscales. Collecting self-report data using Likert scales in questionnaires presents a significant challenge because, when conventional fixed anchor points are used - commonly 5 or 7 points - the data produced have to be numerically coded so that they can be statistically analysed. There appears to be a long-standing controversy about whether data coded in this way justify parametric analysis, because the coding process assigns arbitrary numerical values to non-numerical data collection responses. Usually this is an essential first stage of the data analysis process, but one which then makes the data neither authentic nor actual (Carifio & Perla, 2007; Carifio & Perla, 2008). To manage this issue, an innovative data-range slider was developed and incorporated into the questionnaire to provide much finer anchor-point gradations for each scale item, effectively eliminating fixed anchor-points in favour of a continuous scale, hence enabling parametric statistical analysis of the results to be justifiably conducted.
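By way of illustration only, the following sketch shows the kind of parametric comparison that treating the 0-100 slider responses as continuous (interval-level) data makes defensible; the DataFrame and column names are hypothetical, and the test shown (a Welch's t-test between two subgroups' mean ABC scores) is an example rather than the specific analysis conducted in the study.

```python
# Illustrative parametric comparison of mean ABC scores between two research
# subgroups, treating the 0-100 continuous slider data as interval-level data.
# Assumes a hypothetical pandas DataFrame `datapool` with columns
# 'subgroup' (e.g. 'Control', 'Test', 'Base') and 'abc_mean'.
import pandas as pd
from scipy import stats

def compare_abc(datapool: pd.DataFrame, group_a: str = "Control",
                group_b: str = "Test"):
    """Welch's independent-samples t-test of mean ABC between two subgroups."""
    a = datapool.loc[datapool["subgroup"] == group_a, "abc_mean"]
    b = datapool.loc[datapool["subgroup"] == group_b, "abc_mean"]
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
    return t_stat, p_value
```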

 


 

3.2  Research Methods

I      Outline

As a primary research project, the underlying rationale has been to collect data about levels of academic confidence of a sample of university students, measured through use of the ABC Scale, and to relate the outcomes from this metric to the levels of dyslexia-ness of the students, gauged through the Dyslexia Index (Dx) Profiler developed specially for this study.

 

Students were recruited with the co-operation of the university's Dyslexia and Disability Service by means of an Invitation to Participate e-mail sent out to all students with dyslexia on the Service's mailing list; and also from the wider university community through an Invitation to Participate which comprised a short video animation designed to capture the interest of potential participants through its innovative format and subject content [Available at: http://www.ad1281.uk/invitation.html]. A link to the video, together with key features of the research project, was displayed on the university's intranet page for two weeks and hence achieved maximum exposure for this limited period. The video was hosted on the project's webpages. The aim was to recruit as broad a cross-section of students from the university as possible. Both of these recruitment processes are described more fully in sub-section 3.2(IV). Recruitment was incentivized by offering participants an opportunity to enter a prize draw subsequent to completing the questionnaire, with Amazon vouchers as prizes. These two groups were subsequently defined as research groups RG:DI (Research Group: Dyslexia Identified (by self-declaration)) and RG:ND (Research Group: No Dyslexia (also by self-declaration)) respectively.

 

The online survey was developed and hosted on the project webpages with a link to it provided in the participant recruitment initiatives. Students who chose to follow the link were first taken to an introduction which provided information about the purpose of the research, linked access to a more detailed Research Participant Information Statement which included links to all Ethics Approval Documentation, and a consent statement which participants were required to confirm viewing before access to the Research Questionnaire was granted.

 

The Research Questionnaire comprised two major components: the 24-item ABC Scale (Sander & Sanders, 2006; 2009); and the 20-item Dyslexia Index Profiler, developed specifically for this research project. Additional background information was collected to provide the demographic context of the datapool; this included a short section which asked participants to declare any learning differences, including dyslexia. Information was also collected relating to broader psycho-social constructs which, at the time of the design and development of the research questionnaire, were intended to form the key discriminator for gauging levels of dyslexia-ness. However, in the light of a simulation exercise to test the feasibility of this, it was decided that an additional metric should be developed and incorporated into the questionnaire which more directly assessed dyslexia-ness through the lens of study-skills and academic learning management attributes - hence the development of the Dyslexia Index Profiler. These psycho-social data have not been incorporated into the project's data analysis process because inspection of the collated data indicated that the research hypotheses could be adequately addressed from data collected through the ABC Scale and the Dyslexia Index Profiler alone. Analysis of these presently reserved, additional data may form part of a subsequent study. Hence the two primary metrics, the ABC Scale and the Dyslexia Index Profiler, were established as the dependent and independent variables respectively.

 
 

II     Designing and building an online questionnaire

Questionnaire design rationales

 

The research questionnaire was built to meet clear design parameters. Firstly, these were based on feedback gained from the online questionnaire developed for the project's preceding Master's dissertation (Dykes, 2008), where evidence from participants suggested that the format met the design objective of being broadly 'dyslexia-friendly'. This meant that it had used concise, short sentences that avoided subordinate clauses and were aligned to the left margin only; had used a clear, sans-serif font with monospaced lettering (although some compromises had to be made to ensure rendering compliance with web-browsers in use at the time); had kept instructions brief and as minimal as possible, using jargon-free phraseology with concise meaning; and had used a sensible balance of text colour to background colour which retained sufficient contrast to be properly readable but avoided inducing glare or other visual stress aberrations such as shimmering, fuzziness or dancing letters, effects commonly reported by many individuals with dyslexia (Beacham & Szumko, 2005). Overall, guidance provided by the British Dyslexia Association was helpful in meeting many of these design objectives, and this was supported by my own experience working with university students with dyslexia in academic skills guidance and Disabled Students' Allowance assistive technologies training at the University of Southampton. Other literature was consulted, for example, for further guidance about design features of online and web-based information systems that enabled better access for users with dyslexia (Gregor & Dickinson, 2007; Al-Wabil et al., 2007), and about text formats and web design for visually impaired and dyslexic readers that would improve readability, which included consulting a particularly helpful checklist of desired design features to assist with dyslexia compliance (Evett & Brown, 2005). Thus, the design features of that earlier web-based survey were reviewed, developed and adapted for this project's questionnaire.

 

Secondly, a range of later research was consulted to explore how dyslexia-friendly online webpage design may have been reviewed and updated in the light of the substantial expansion over the last two decades of online learning initiatives. These have developed within HE institutions through virtual learning environments  (VLEs) and digital learning object platforms such as Xerte (Xerte Community, 2015) and Articulate (Omniplex Group, 2018), and from external sources such as MOOCs and free-course providers such as FutureLearn (Open University, 2018), all of which rely on modern web-browser functionality (e.g.: Rello et al., 2012; Chen et al., 2016; Berget et al., 2016).

 

Additionally, the literature was consulted to understand how the latest HTML5 web technologies and the rapid rise in usage of smart mobile devices were influencing universal web design (Riley-Huff, 2012; 2015; Henry et al., 2014; Fogli et al., 2014; Baker, 2014). The outcome of this review identified that online information presentations which enshrined strong accessibility protocols not only enabled better access for those with dyslexia and those who experienced visual stress or other vision differences but provided better accessibility and more straightforward functionality for everyone (McCarthy & Swierenga, 2010). Other literature was consulted for guidance about the impact of design and response formats on data quality (Maloshonok & Terentev, 2016), on response and completion rates (Fan & Yan, 2010), on the effectiveness of prize draw incentivizations (Sanchez-Fernandez et al., 2012) and invitation design (Kaplowitz et al., 2011), and about more general web form design characteristics recommended for effectiveness and accessibility (Baatard, 2012).

 

Hence the project questionnaire was designed according to these specifications:

  • it was an online questionnaire that rendered properly in at least the four most popular web-browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Safari (usage popularity respectively 69.9%, 17.8%, 6.1%, 3.6%, data for March 2016 (w3schools.com, 2016)). Advice was provided in the questionnaire pre-amble that these were the best web-browsers for viewing and interacting with the questionnaire and links were provided for downloading the latest versions of the browsers;

  • the text, fonts and colours attempted to follow the latest W3C web-accessibility protocols, and these were incorporated into design styles to make the questionnaire attractive to view and easy to engage with. W3C Web Accessibility Initiative Guidelines were consulted for this purpose (W3C WAI, 2016);

  • brief information was provided in an opening pre-amble to the questionnaire, relating to the nature of the research and what it was trying to explore; this was a summary of information provided in the Participant Information Statement and the Consent form;

  • an estimate was provided about how long it would take to complete the questionnaire (15 minutes);

  • questions were grouped into short, distinct sections, each focusing on a specific aspect of the research, with each question-group viewable on the webpage one section at a time. The intention was that respondents should be encouraged to work through the full questionnaire in smaller sections and hence not be deterred by its length. This was to attempt to reduce survey fatigue and poor completion rates (McPeake et al., 2014; Ganassali, 2008; Flowerdew & Martin, 2008; Marcus et al., 2007; Cohen & Manion, 1994). In the event, only 17 of the 183 questionnaires returned were incomplete (9.2%);

  • a minimum of demographic information was collected at the start of the questionnaire to enable a rapid progression to the main features;

  • the main body of the questionnaire used Likert-style items in groups, and presented the response options using range sliders to gauge the statements. This was to meet the data analysis criterion that data collected would be as close to continuous as possible rather than discrete, hence enabling parametric statistical analysis later, an argument strongly supported in the literature consulted (Jamieson, 2004; Pell, 2005; Carifio & Perla, 2007; 2008; Grace-Martin, 2008; Ladd, 2009; Norman, 2010; Murray, 2013; Mircioiu & Atkinson, 2017). However, the issue of conducting parametric analysis on data generated from Likert-style scales remains controversial, aggravated by a tendency amongst researchers not to clearly demonstrate their understanding of the differences between Likert-style scales and Likert-style scale items (Brown, 2011), and compounded by not properly clarifying whether their scales are gauging nominal, ordinal, or interval (i.e. continuous) variables;

  • the questionnaire scale item statements were written as neutrally as possible or, in instances where this was difficult to phrase, to comprise a blend of negatively- to positively-phrased wording. This was to avoid suggesting that the focus of the questionnaire was on evaluating the impacts of learning difficulty, disability or other learning challenge on studying at university, but rather that the research was using a balanced approach to explore a range of study strengths as well as challenges. A range of literature was consulted to support this design criterion, which confirmed the desirability of balancing negative and positive wordings (e.g.: Sudman & Bradburn, 1982), although other evidence showed that wording 'polarity' can influence respondents' answers to individual questions, with 'no' being a more likely response to negative questions than 'yes' is to positively worded ones (Kamoen et al., 2013). Barnette (2000) found evidence through internal consistency reliability analysis that the widely claimed supposition that negatively worded survey items encourage respondents to be more attendant to the items was dubious at best, and also that mixing item stem polarity may be confusing to respondents. Hence applying scale item statement neutrality where possible was considered the safest approach for minimizing bias that might be introduced through scale item statement wording;

  • a free-writing field was included to encourage participants to feel engaged with the research by providing an opportunity to make further comments about their studies at university in whatever form they wished. This had proved to be a popular feature in the preceding Master’s dissertation questionnaire (Dykes, 2008), where respondents who opted to volunteer their thoughts, feelings and opinions provided rich, qualitative data that aided the data analysis process. Data collected in this way would be incorporated in the data analysis process later;

  • after completing all sections, submitting the questionnaire form would trigger an acknowledgement webpage to open, where a copy of the responses submitted would be available to view together with an opportunity to request revocation of the data if desired;

  • each participant's completed questionnaire response generated a unique identifier when submitted so that any individual dataset could be identified and withdrawn if this was requested, and the data removed from the data collation spreadsheet.
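The coding scheme actually built into the 'submit' feature is not reproduced here; purely as a hypothetical illustration, an anonymous submission identifier might be generated along the following lines, so that a returned dataset can be indexed for later withdrawal without being attributable to the participant.

```python
# Hypothetical illustration only: generating an anonymous submission identifier
# from the submission timestamp plus a random component, so that a returned
# dataset can be located for withdrawal without identifying the participant.
import secrets
from datetime import datetime, timezone

def make_submission_id() -> str:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{stamp}-{secrets.token_hex(4)}"   # e.g. '20160314101500-9f2ac1d4'

print(make_submission_id())
```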

 

An integral part of the questionnaire design preparation stage was to conduct a brief review of existing web survey applications currently available, to determine if any provided sufficiently flexible design customizability to meet the design specifications that had been scoped out. The applications reviewed included Google Forms (Google, 2016), SurveyMonkey (Survey Monkey, 2016), SurveyLegend (Survey Legend, 2016), Polldaddy (Automattic, 2016), Survey Planet (Survey Planet, 2016), Survey Nuts (Zapier Inc., 2016), Zoho Survey (Zoho Corp., 2016) and Survey Gizmo (Widgix, 2016). However, the limitations of these proprietary survey design applications were numerous and broadly similar from one app to another. These included: limited numbers of respondents per survey; strictly constrained design and functionality options; and advertising or custom-branding. These were limitations which could only, and variously, be removed by subscribing to payment plans. None of the apps reviewed included the functionality to replace their standard radio button response selection format with range input sliders. Hence, given existing expertise in web form design and web-authoring more generally, it was considered expedient to design and build a questionnaire web survey form from scratch that could meet all the design specifications identified and be hosted on the project's webpages.

Questionnaire construction and the HTML5 scaffolding

 

The questionnaire was constructed as a (web) form with the data input fields set through form field input properties. These included input selectors such as drop-down menus, radio buttons and, for the main body of the questionnaire, input range sliders. These were created using recently introduced HTML5 functionality and were made easy to use and visually appealing through CSS (cascading style sheet) styling protocols. The input range sliders were used to collect data for all of the Likert-style scale items in the questionnaire.

Each section was accessed using HTML spry accordion panels which enabled each Likert scale to be revealed on demand by clicking its panel header, a process which simultaneously closed the already-open panel. Thus, only one section of the questionnaire was viewable at a time. By clicking a panel header it was possible to return to any section at any time allowing responses to be reconsidered.

A short paragraph at the foot of the questionnaire thanked the respondent for participating, provided information about the questionnaire submission process, outlined the prize draw incentivization and explained the process for data withdrawal. Submitting the questionnaire activated a form script which converted the form-fields’ data into an e-mail which included the data as a .csv file for direct import into the master Excel spreadsheet.
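In the study, the emailed .csv files were imported directly into the master Excel spreadsheet; as an indicative sketch only, an equivalent programmatic collation of the per-respondent files (assuming they are saved into a local folder) might look like this:

```python
# Indicative sketch of collating per-respondent .csv response files into a single
# master dataset; the folder and file names are assumptions for illustration.
from pathlib import Path
import pandas as pd

def collate_responses(folder: str) -> pd.DataFrame:
    """Read every per-respondent .csv file in `folder` and stack the rows."""
    frames = [pd.read_csv(csv_file) for csv_file in sorted(Path(folder).glob("*.csv"))]
    return pd.concat(frames, ignore_index=True)

# Example usage:
# master = collate_responses("questionnaire_returns")
# master.to_excel("datapool_master.xlsx", index=False)
```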

 

Questionnaire sections and components

 

The questionnaire remains available online [at: http://www.ad1281.uk/researchQNR.html] and a screen-print is shown as Appendix 7.4 but in summary, the questionnaire was structured thus:

1. Participant Demographics:

A question-set to collect demographic data such as respondent gender, student domicile, and study level. This section additionally requested information concerning learning challenges or differences, or non-physical disabilities. Participants who had declared 'dyslexia' as their specific learning challenge were invited to record how they had learned of their dyslexia by choosing options from two drop-down menus that completed the sentence: 'My dyslexia was disclosed/described/identified/diagnosed to me as a learning disability/difference/weakness/strength/deficit/difficulty' (Figure 8).

 

               Figure 8:          Selecting how dyslexic students learned of their dyslexia

2. Gauging academic confidence: The Academic Behavioural Confidence Scale:

The second section presented the ABC Scale (Sander & Sanders, 2003; 2006; 2009) in its complete, original 24-scale-item format. Each scale item completed the stem statement: 'How confident are you that you will be able to ...' provided at the top of the list of scale items. To register a response, a slider control was adjusted from its default, mid-point position along a range scale from 0% to 100%. The % position of the slider control was displayed in a small active window to the right of the slider (Figure 9).

 

Figure 9:     Likert scale item continuous range input slider used throughout the research questionnaire.

From the data received, a mean average ABC value was calculated for each respondent, with no weightings applied to any ABC dimension. This value constituted the dependent variable.
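A minimal sketch of this calculation follows, assuming a hypothetical pandas DataFrame `abc_items` holding the 24 slider responses (0-100) for each respondent:

```python
# Sketch of the unweighted mean ABC calculation. Assumes `abc_items` is a
# pandas DataFrame with one row per respondent and 24 columns of 0-100 values.
import pandas as pd

def mean_abc(abc_items: pd.DataFrame) -> pd.Series:
    """Per-respondent mean of the 24 ABC Scale items (no weightings applied)."""
    return abc_items.mean(axis=1)
```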

 

3. The six psychometric constructs:

The next part of the questionnaire aimed to gauge each respondent's agreement with 36 statements grouped into 6 subscales of 6 scale-items. The subscales evaluated: Learning Related Emotions, Anxiety regulation & Motivation, Academic Self-Efficacy, Self-Esteem, Learned Helplessness, and Academic Procrastination. Evidence from the literature suggests that discernible differences exist between dyslexic and non-dyslexic individuals in each of these constructs; for example, levels of self-esteem have been found to be depressed in dyslexic individuals in comparison to their non-dyslexic peers (e.g.: Riddick et al., 1999; Humphrey, 2002; Burton, 2004; Alexander-Passe, 2006; Terras et al., 2009; Glazzard, 2010; Nalavany et al., 2013).

 

Humphrey and Mullins (2002) looked at several factors that influenced the ways in which dyslexic children perceived themselves as learners, identifying learned helplessness as a significant characteristic; and a study by Klassen et al. (2008) compared levels of procrastination between students with and without dyslexia, finding that dyslexic students exhibit significantly higher levels of procrastination when tackling their academic studies at university in comparison to students with no indication of dyslexia. Each scale item completed the stem statement: 'To what extent do you agree or disagree with these statements ...', where 0% indicated strong disagreement and 100% registered strong agreement. Statements presented included: 'I find it quite difficult to concentrate on my work most of the time', 'I approach my written work with a high expectation of success', 'I often felt pretty stupid at school'.

 

In the early stage of the research design process, it was considered possible that these six subscales could be combined into a profile visualization which may have had sufficient discriminative power to enable quasi-dyslexic students to be identified from the group of non-dyslexic students. The original rationale was to use the data collected in these subscales to enable strong, six-axis radar-chart visualizations (see Figure 10) to be generated which would be broadly based on the locus of control profiles created in the preceding Masters dissertation (Dykes 2008). In that study, promise had been shown for these visualizations to have discriminative properties such that students with dyslexia presented distinctive profile sets that were in contrast to those generated from the data for non-dyslexic students.

 

To trial the idea in advance of the research questionnaire becoming active, pseudo-data were generated to simulate results for a typically dyslexic, and a typically non-dyslexic individual, based on stereotypical rationales built from my own experience of working with students with dyslexia at university and prior evidence from the previous study. Profiles of mean-average pseudo-data for dyslexia and non-dyslexia generated the background profiles and a known non-dyslexic individual was used to generate the 'This Respondent' profile.

 

Figure 10:   The radar chart visualization of the 6 psycho-sociometric constructs generated from a trial pseudo-respondent.
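A minimal sketch of how a six-axis radar profile of this kind can be drawn with matplotlib is shown below; the construct values are invented solely for illustration and do not reproduce the pseudo-data or respondent data used in the study.

```python
# Minimal sketch of a six-axis radar (spider) chart of the psychometric
# constructs; all values below are invented purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

constructs = ["Learning Related Emotions", "Anxiety regulation & Motivation",
              "Academic Self-Efficacy", "Self-Esteem",
              "Learned Helplessness", "Academic Procrastination"]
profiles = {
    "Mean (dyslexic)":     [55, 42, 48, 44, 58, 62],
    "Mean (non-dyslexic)": [68, 61, 66, 63, 38, 45],
    "This Respondent":     [52, 40, 50, 47, 60, 66],
}

angles = np.linspace(0, 2 * np.pi, len(constructs), endpoint=False).tolist()
angles += angles[:1]                       # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, values in profiles.items():
    data = values + values[:1]
    ax.plot(angles, data, label=label)
    ax.fill(angles, data, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(constructs, fontsize=7)
ax.set_ylim(0, 100)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1), fontsize=7)
plt.show()
```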

Although the resulting visualizations were quite different from each other, concern emerged about whether the profiles generated from the real data collected later would present sufficiently visible differences to enable the profile charts to be accurately used as a discriminating tool. It seemed likely that identifying the profile anomalies would rely on a 'by eye' judgement in the absence of a more scientific, data-analysis-based criterion that was either readily available (none were found in existing literature) or possible to formulate. Therefore, it was considered that a more robust, defendable and quantitative process would be required as the discriminator between dyslexic and non-dyslexic students that could be used as the identifier of quasi-dyslexic students to comprise the Test research subgroup.

 

Even so, the data visualizations were highly interesting and so this section of the questionnaire was not deleted. Once data had been collected, the complete set of profile charts was constructed [Available at: http://ad1281.uk/phdQNRprofiles.html]. It is possible that this process and data may be explored more fully as part of a subsequent study. Figure 10 shows an example of the profile charts, this one generated from data submitted by research participant #11098724, a respondent who had declared a known dyslexic learning challenge, overlaid onto the profiles of mean average data. It can be seen that there are obvious differences between the profile of the mean average data for all students in the datapool with dyslexia, in comparison to the profile of the mean average data for all other students in the datapool.

 

This is particularly noticeable for the constructs Anxiety regulation & Motivation and Learned Helplessness. It can also be seen that the profile for this dyslexic student overlaid onto these is more aligned with the mean profile for dyslexia than with the mean profile for non-dyslexic students. However, such alignment was not so obviously identifiable in many of the other profile visualizations that were constructed. Hence the development of the Dyslexia Index (Dx) Profiler, initially intended as a belt-and-braces backstop to cover the possibility that these profile visualizations proved untrustworthy as a discriminator, was vindicated. In the end, the data collected through the Dx Profiler proved entirely appropriate and sufficient for addressing the research hypotheses, and hence the data collected from the six psychometric constructs were set aside.

4. Gauging dyslexia-ness: The Dyslexia Index Profiler:

The Dyslexia Index (Dx) Profiler formed the final Likert scale on the main questionnaire for this project and attempted to establish the levels of dyslexia-ness of all respondents by requesting them to: “reflect on other* aspects of approaches to your studying or your learning history - perhaps related to difficulties you may have had at school - and about your time management and organizational skills more generally” (Project QNR section 4). (*'other' refers to the earlier parts of the complete questionnaire.) The Dx Profiler was developed and constructed firstly on the basis of the theoretical evidence about characteristics of dyslexia typically observed amongst university students identified with the syndrome, discussed in sub-sections 2.1(II) and 2.1(VII); and secondly from primary data collected from dyslexia assessors and study-support tutors at UK universities.

 

This small-scale sub-project is reported in full in Appendix 8.1(I), but its outcome substantially aided the development of the Dx Profiler: as an evaluative tool it could be shown to present strong internal reliability (Cronbach’s α = 0.852, with an upper-limit 95% confidence value of α = 0.889), and it appeared valid and fit for purpose as the discriminator for dyslexia-ness that would be used to establish the project’s Test, Control and Base research subgroups. In summary, it was considered that the Dx Profiler could be used confidently as the dyslexia-ness discriminator, and it proved fortunate that it had been developed, since reliance on the earlier idea of the six-construct psychometric visualizations as the mechanism for sifting respondents into the three research subgroups was then not necessary.

The 20 Likert-style scale items followed the established pattern of collecting each participant response through a continuous range input slider measuring the degree of agreement with a statement, each statement representing one dimension of dyslexia-ness (see Table 1 in sub-section 3.1(IV)). The set of dimension statements was collectively prefixed at the head of the section with the stem query: 'To what extent do you agree or disagree with these statements ...'. Range input sliders could be set at any value from 0%, representing strong disagreement, to 100%, representing strong agreement. In the construction stage of the questionnaire the statements were ordered in a way which broadly grouped them thematically, but this order was scrambled using a random number generator to establish the order in which the statements were presented in the final iteration of the questionnaire, as sketched below. This was to reduce the likelihood of order-effect bias, as there is some evidence that the sequence of questions or statements in a survey may induce a question-priming effect such that the response provided for one statement subsequently influences the response to the following one when both appear to be gauging the same or a similar aspect of the construct under scrutiny (McFarland, 1981). A weighted mean average Dyslexia Index (Dx) value was subsequently calculated for each respondent. The weighting process resulted from extensive development work based on data collected in a pilot study about dimensions of dyslexia and is described in Appendix 8.1(I).
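A minimal sketch of the order-scrambling idea is shown below, assuming placeholder statement texts; the project's questionnaire was built as a web form, so this is an illustration of the principle rather than the build code.

```python
# Sketch only: scramble the thematic order of the 20 dimension statements with
# a random number generator so that the presentation order in the deployed
# questionnaire reduces order-effect bias. Statement texts are placeholders.
import random

statements = [f"Dimension statement {i + 1}" for i in range(20)]   # hypothetical
rng = random.Random(2016)      # fixed seed so the scrambled order is reproducible
presentation_order = statements[:]
rng.shuffle(presentation_order)

for position, statement in enumerate(presentation_order, start=1):
    print(position, statement)
```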

5. How can studying at university be improved? Supporting qualitative data

A precedent had been set for collecting additional qualitative data, as this had previously elicited thoughtful and reflective supporting material (Dykes, 2008). In the introduction to this questionnaire, respondents were told that the focus of the research was to explore learning strengths, challenges and preferences in response to the demands of academic tasks at university. Hence this final part of the questionnaire provided an opportunity for respondents to disclose anything else about their study at university. The introduction included an invitation for students to suggest how studying at university could be improved in ways that might better suit their learning and study circumstances. A fuller account and analysis of this qualitative data is reserved for a later study; however, where pertinent, aspects of it have been included in the discussion (see Section 5, below).

 

The complete questionnaire was trialled amongst a small student peer-group (n=10) to gain feedback about its style of presentation, ease of use, the clarity of the questions and statements, the quality of the introduction, the length of time it took to complete, any issues with how it displayed in the web browsers used, and any more general comments that might require a full or partial review of the questionnaire before proper deployment. The outcome of this pilot indicated that no significant amendments were necessary.


III    Data Process Summary

Questionnaire responses were received by e-mail and identified from the data as submitted either by a student with declared dyslexia or by a student with no declared dyslexia. The raw data were then transferred into Excel for initial inspection. A Dyslexia Index (Dx) value was calculated for each respondent using the weighted mean average process applied to the 20 scale items, developed at the design stage of the Dx Profiler in the light of the analysis of the pilot study described in Appendix 8.1(I). Each respondent’s ABC score was calculated as an unweighted mean average of the 24 scale items. Figure 11 shows a summary overview of the data process workflow. The complete datapool was transferred into SPSS v24 (IBM Corp, 2016) for further analysis.
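The collation step summarized in Figure 11 might look something like the following sketch, assuming hypothetical field names such as declared_dyslexia and abc_01 ... abc_24 (the actual questionnaire output fields are not reproduced here).

```python
# Illustrative sketch of the collation workflow: gather the response .csv
# files, tag each dataset as RG:DI or RG:ND from a declared-dyslexia field,
# and compute the unweighted ABC mean. Column names are assumptions.
from pathlib import Path
import pandas as pd

frames = [pd.read_csv(f) for f in Path("responses").glob("*.csv")]
datapool = pd.concat(frames, ignore_index=True)

abc_cols = [f"abc_{i:02d}" for i in range(1, 25)]          # 24 ABC items, 0-100
datapool["research_group"] = datapool["declared_dyslexia"].map(
    {"yes": "DI", "no": "ND"})
datapool["ABC"] = datapool[abc_cols].mean(axis=1)          # unweighted mean
# Dx requires the weighted-mean calculation described later, so it is not
# computed here.
```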

Fig11DataProcFlow.png

Figure 11:   Data processing flowchart.

 
 

IV    The datapool and data collection

The sample of students relied on convenience sampling: the university presented a large cohort of students studying at all levels, with either home or non-UK residency status. The aim was to recruit as many students as possible, although an exploration of the relationship between sample size and statistical power was also conducted, where power is determined by the ability of the statistical tests used to minimize a false negative result - that is, a Type II error - in relation to the null hypothesis. Sullivan (2009) comments on the balance between effect size and sample size by remarking that it is easier to detect a larger effect size in a small sample than when the effect size - that is, the difference between the means - is small; conversely, a smaller effect size requires a larger sample in order to be correctly identified. Finding a way to establish a sample size that is appropriate for the desired power level is therefore important. Cohen (1992) suggests that a test with 80% power is statistically powerful, and Sullivan provides guidance about how to establish an appropriate sample size by suggesting that either data from a pilot study should be used, or that results from similar studies published by other researchers can be considered.

 

However, no such prior studies are available because this project is the first to directly explore differences in ABC in relation to levels of dyslexia-ness in university students. This means that only a post-hoc estimate of the statistical power of the study can be generated, based on the data collected and analysed and on the size of the sample, in this case n=166. This might be considered in tandem with effect size calculations based on between-groups differences rather than associations; for any future studies that emerge from this one, the current study could itself serve as the pilot for exploring the relationships between academic confidence and dyslexia amongst university students. Nevertheless, a post-hoc estimate of the statistical power of this study is provided in the Results (Section 4), more as a demonstration of an awareness of these concepts than as a contributor to the key outcomes of the analysis.
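For illustration, a power estimate of the kind described can be generated with standard tools; in the sketch below the group sizes match those reported later (68 and 98), but the effect size of d = 0.5 is an assumption chosen purely for demonstration.

```python
# Sketch of a power estimate for an independent two-group comparison with
# group sizes n1 = 68 and n2 = 98. The effect size d = 0.5 is illustrative
# only; the study's observed effect sizes are reported in Section 4.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5, nobs1=68, ratio=98 / 68, alpha=0.05)
print(f"Estimated power: {power:.2f}")

# The same object can be inverted to ask what sample size per group would
# deliver Cohen's (1992) conventional 80% power at a given effect size.
n_required = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05, ratio=1.0)
print(f"Approx. n per group for 80% power at d = 0.5: {n_required:.0f}")
```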

 

Through the recruitment processes, students who chose to participate did so either by responding to the Invitation to Participate e-mail which they had received from the Disability and Dyslexia Service, these students subsequently constituting Research Group DI (RG:DI), or by responding to the Invitation to Participate publicity posted on the university's intranet home page, these students subsequently constituting Research Group ND (RG:ND). It was of no consequence if students with dyslexia found their way to the questionnaire through the intranet links rather than in response to the Disability and Dyslexia Service's e-mail, because the opening section of the questionnaire asked participants to declare any dyslexic learning challenges and they would therefore be assigned to the appropriate research group.

 

Every questionnaire response e-mail received was generated by the questionnaire submission process, which anonymised the data by labelling the response with a randomly generated 8-figure Questionnaire Response Identifier (QRI). The QRI was automatically added to the data field set in the questionnaire by the post-action process for submitting the form as an e-mail, and was also published to the respondent on the Questionnaire Acknowledgement webpage which appeared when the questionnaire Submit button was activated. The Questionnaire Acknowledgement page thanked the respondent for participating, presented a complete summary copy of all responses provided in the questionnaire, and added a data withdrawal option through a link to the Participant Revocation Form. Should any participant have chosen to withdraw their data (none did), this form requested the QRI so that, when submitted, the corresponding dataset could be found and deleted.
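By way of illustration only, an 8-figure identifier of this kind can be generated as follows; this is not the code used by the live questionnaire's submission process.

```python
# Illustrative only: one way to generate a random 8-figure Questionnaire
# Response Identifier (QRI), zero-padded to eight digits.
import secrets

def generate_qri() -> str:
    """Return a random 8-digit identifier as a string."""
    return f"{secrets.randbelow(10**8):08d}"

print(generate_qri())   # e.g. an 8-figure label such as '04271953'
```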

 

No respondent contact information was requested, as the complete process for data withdrawal could be completed using the QRI alone. The Questionnaire Acknowledgement page also included the option to participate in the prize draw; for this, the respondent's contact e-mail address or phone number was requested through a short, single-entry electronic form that did not connect these contact details to the QRI. This ensured that complete participant anonymity was preserved at all times, making it impossible to connect any of the 166 datasets retained in the datapool to any prize draw entrant's e-mail address or phone number. This complete process was approved by the university's Education Department Ethics Sub-Committee as appropriate and fit for purpose. Screen-prints of these webpages are in Appendix 8.3. Figure 12 indicates the data-collection process.

Fig12.png

Figure 12:   Data collection process flowchart.  

 
 

V     Procedures for data collation and pre-analysis

Questionnaire responses were received during a period of two months, eventually totalling 183 responses. Of these, 17 were discarded because they were more than 50% incomplete. Of the remaining 166 datasets, 68 were from students with dyslexia, hence forming Research Group: DI, and 98 were from students declaring no dyslexic learning challenges, forming Research Group: ND.

 

On receipt of the form data, each message was identified from the tabulated form-field data as originating either from a student with dyslexia or from a student with no declared dyslexia, and each complete dataset was saved into its designated folder. The .csv file of the dataset automatically attached to the form submission was imported directly into Excel. The initial inspection of data in Excel enabled the first iteration of Dyslexia Index (Dx) values to be established from the Dyslexia Index Profiler metric using a weighted mean average of the raw score values, each ranging from 0 to 100, for the 20 Dyslexia Index scale items. This was consistent with the scale-item specifications that had been built into the Profiler at the design stage (reported fully in Appendix 8.1). This process determined a level of dyslexia-ness for each respondent, expressed as a value scaled up to lie between 0 (suggesting a negligible level of dyslexia-ness) and 1000. The scaling was used so that Dx values would be easily distinguishable from ABC Scale values, which ranged from 0 to 100. Recall that the principal aim of the Dx Profiler is to find students in Research Group ND who declared no dyslexia but who present levels of dyslexia-ness more consistent with those of their peers in Research Group DI who have indicated that they are dyslexic, hence establishing the Test research subgroup.
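A compact sketch of this calculation is given below; the weights are placeholders (the actual weights are reported in Appendix 8.1), and the scaling simply multiplies the 0-100 weighted mean by 10 to give the 0-1000 Dx range described above.

```python
# Sketch of the Dx calculation: a weighted mean of the 20 raw slider scores
# (each 0-100), rescaled to 0-1000 so Dx values are easily distinguished from
# ABC values (0-100). The weights below are placeholders, not project values.
def dyslexia_index(raw_scores, weights):
    """Weighted mean of 20 items (0-100 each), scaled up to the 0-1000 range."""
    weighted_mean = sum(s * w for s, w in zip(raw_scores, weights)) / sum(weights)
    return weighted_mean * 10            # 0-100 -> 0-1000

example_scores = [55, 70, 62, 48, 80, 35, 66, 72, 58, 90,
                  40, 67, 75, 52, 63, 71, 44, 69, 77, 60]   # illustrative
example_weights = [1.0] * 20                                 # placeholder weights
print(round(dyslexia_index(example_scores, example_weights)))   # prints 627 here
```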

 

The result of this initial data conversion process revealed a Dyslexia Index range of 88 < Dx < 909 for students in Research Group ND and a corresponding range of 340 < Dx < 913 for students in Research Group DI. With median values of Dx = 542 and Dx = 626 respectively, this first indicated that the Dx Profiler was revealing interesting differences; more importantly, the similar top ends of both ranges suggested that a proportion of students who declared no dyslexic learning challenges in their questionnaire (RG:ND) were indeed presenting levels of dyslexia-ness of similar values to students in the dyslexic group. Hence early indications were that the Dx Profiler was working as designed.

 

This data conversion process enabled the Base, Test and Control research subgroups to be generated in accordance with Dyslexia Index boundary values that were established at an early stage of the data analysis process (see sub-section 4.3(III)). In summary, the outcome of this subsequent analysis enabled the value of Dx = 400 to be established as the upper boundary for datasets in Research Group ND to be sifted into the Base research subgroup, and a boundary value of Dx = 592.5 to be set for sifting datasets from Research Group ND into the Test research subgroup and datasets from Research Group DI into the Control research subgroup.
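These sifting rules can be restated compactly as in the sketch below; whether the boundary values themselves fall inside or outside each subgroup is detailed in sub-section 4.3(III), so the inequalities used here are one reasonable reading rather than a definitive statement.

```python
# Compact restatement of the subgroup sifting rules described above.
def research_subgroup(group: str, dx: float):
    """group is 'ND' or 'DI'; returns 'Base', 'Test', 'Control', or None."""
    if group == "ND" and dx < 400:
        return "Base"
    if group == "ND" and dx >= 592.5:
        return "Test"
    if group == "DI" and dx >= 592.5:
        return "Control"
    return None   # datasets not sifted into any of the three subgroups

print(research_subgroup("ND", 350))    # Base
print(research_subgroup("ND", 640))    # Test
print(research_subgroup("DI", 700))    # Control
```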

 

Each respondent's academic confidence was determined as a simple mean average of the 24 ABC Scale items, where each item ranged from 0 to 100. It is believed that this is the first adaptation of the ABC Scale to use continuous-scale range input responders in place of the conventional 5-anchor-point Likert-style responders. A further development of this current research project might focus an enquiry on the ways in which this adaptation of the ABC Scale may affect the scale's internal consistency, its construct validity, and topic sensitivity (Albaum et al., 2017). Such a study might usefully add to the very small number of studies which have explored these factors and others, such as data quality and response rates, in web-survey design (e.g.: Roster et al., 2015; Buskirk et al., 2015).

 

VI    Statistical tools and processes

Use of the t-test in preference to ANOVA

 

Through the adoption and adaptation of the ABC Scale and the careful design and development of the Dyslexia Index Profiler, both of these metrics are treated as continuous variables, being respectively the dependent and independent measures in this study. Although the datapool has been sifted into the three research subgroups described above, dyslexia-ness remained a continuous variable across the complete datapool and thus individual students' data response pairings across the two variables were preserved. This would enable a regression analysis to be considered later to determine whether any predictive association exists between dyslexia-ness and academic confidence.
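The preserved pairings mean a simple linear regression of ABC on Dx is straightforward to fit; the sketch below uses simulated data purely to illustrate the idea.

```python
# Sketch of the later regression idea: each respondent retains a (Dx, ABC)
# pairing, so ABC can be regressed on Dyslexia Index. Data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dx = rng.uniform(100, 900, size=166)                      # simulated Dx values
abc = 80 - 0.03 * dx + rng.normal(0, 8, size=166)         # simulated ABC values

result = stats.linregress(dx, abc)
print(f"slope = {result.slope:.3f}, r = {result.rvalue:.2f}, p = {result.pvalue:.3g}")
```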

 

The focus of the data analysis in this enquiry has been to determine whether significant differences exist in mean values of the dependent variable across the research subgroups. It is recognized that the application of ANOVA to these data may have been appropriate, although ANOVA is usually recommended when the independent variable is categorical in nature (Lund & Lund, 2016). In this current study, had dyslexia-ness been categorized into 'high', 'moderate' or 'low', or indeed sub-gradations of these, ANOVA may well have been an appropriate statistical test to use (Moore & McCabe, 1999). However, it was felt that the relatively simpler Student's t-test would be a better choice for determining whether or not significant differences exist in (population) mean values of ABC where the continuously-valued Dyslexia Index is used as the independent variable.

 

In this way, a matrix of t-test outcome-pairs could be constructed which would identify significant differences not only between levels of ABC for the three research subgroups, but also at a factorial level both of ABC and of Dyslexia Index following a principal component analysis of both variables. It is recognized that the t-statistic used in the t-test forms the basis of ANOVA in any case: for a two-group comparison, the F-statistic required in ANOVA is exactly equal to t². It is possible that this analysis decision may be reconsidered, perhaps as a recommended project development, by redefining Dyslexia Index as a categorical variable and establishing clear categorical boundaries containing ranges of dyslexia-ness that could be assigned categories such as 'low', 'low-to-moderate' ... etc. In this way, an ANOVA would then be an appropriate statistical analysis to perform.
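The matrix of pairwise comparisons might be assembled as in the following sketch, which uses simulated ABC values and Welch's unequal-variance t-test as one reasonable choice; it is not presented as the project's exact analysis.

```python
# Sketch of a pairwise t-test matrix across the three research subgroups' ABC
# values. Data and subgroup sizes are simulated; equal_var=False (Welch) is
# one reasonable choice, not necessarily the variant used in the project.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
subgroups = {
    "Base":    rng.normal(72, 10, 44),    # simulated ABC values
    "Test":    rng.normal(62, 12, 18),
    "Control": rng.normal(60, 12, 47),
}

for (name_a, a), (name_b, b) in combinations(subgroups.items(), 2):
    t, p = stats.ttest_ind(a, b, equal_var=False)
    print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.3f}")
```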

Effect sizes

 

Effect size challenges the traditional convention that the p-value is the most important data analysis outcome for determining whether an observed effect is real or can be attributed to chance events (Maher et al., 2013). The use of effect size as a method for reporting statistically important analysis outcomes is gaining traction in education, social science and psychology research (Rollins et al., 2019), not least in studies about dyslexia, where it is claimed to be a vital statistic for quantifying the outcomes of interventions designed to assist struggling readers (ibid).

 

Effect size values measure either the magnitude of associations or the magnitude of differences, depending on the nature of the datasets being analysed. Effect size is easy to calculate; indeed, the simplest result is the absolute difference between the means of two independent groups' datasets. Cohen’s ‘d’ is an improved measure, derived by dividing this difference by the standard deviation of either group (Cohen, 1988), and is commonly used (Thalheimer, 2002).

 

Effect size is useful as a measure of the between-groups difference between means, particularly when measurements have no intrinsic meaning, as is often the case with data generated from Likert-style scales (Sullivan & Feinn, 2012, p279). Hence at an early stage of planning the data analysis process, effect size measures were chosen as the main data analysis outcomes, although in preference to Cohen’s ‘d’ the alternative effect size measure of Hedges' ‘g’ was used, because this measure takes better account of the sample sizes of the respective distributions by using a 'pooled' (that is, weighted) standard deviation in the effect size calculation (Cumming, 2010). This is especially appropriate when the sample sizes are notably different, which is the case in this project.
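A sketch of the Hedges' g calculation, using a pooled (sample-size weighted) standard deviation, is given below; the small-sample correction factor included at the end is the commonly used approximation and may not correspond exactly to the formula applied in the project's analysis.

```python
# Sketch of a Hedges' g calculation with a pooled standard deviation. The
# final small-sample correction is the standard approximation; values below
# are illustrative only.
import numpy as np

def hedges_g(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / pooled_sd          # Cohen's d with pooled SD
    correction = 1 - 3 / (4 * (n1 + n2) - 9)       # small-sample bias correction
    return d * correction

group_a = [70, 75, 68, 80, 72, 77, 69, 74]         # illustrative ABC scores
group_b = [60, 65, 58, 62, 61, 66]
print(round(hedges_g(group_a, group_b), 2))
```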

Principal Component Analysis

 

Principal Component Analysis (PCA) performs dimensionality reduction on a set of data, especially data generated by a scale that is attempting to evaluate a construct. The point of the process is to see whether a multi-item scale can be reduced to a simpler structure with fewer components (Kline, 1994).

 

As a useful precedent, Sander and Sanders (2003) recognized that dimension reduction may be appropriate and conducted a factor analysis of their original, 24-item ABC Scale, which generated a 6-factor structure whose components were designated as Grades, Studying, Verbalizing, Attendance, Understanding, and Requesting. Their later analysis of the factor structure found that it could be reduced to a 17-item scale with 4 factors, designated as Grades, Verbalizing, Studying and Attendance (Sander & Sanders, 2009). The reduced, 17-item ABC Scale simply discounts 7 dimensions from the original 24-item scale, which is otherwise unamended. In this project it was therefore considered appropriate to deploy the full, 24-item scale to generate an overall mean ABC value in the analysis, so that an alternative 17-item overall mean ABC value could also be calculated to examine how this may impact on the outcomes.

 

But much like Cronbach's alpha as a measure of internal consistency, factor analysis is specific to the dataset to which it is applied. Hence the factor analysis that Sander and Sanders (ibid) used, and which generated their reduced item scale with four factors, was derived from analysis of the collated datasets they had available from previous work with the ABC Scale, sizeable though this became (n=865). It was considered, therefore, that the factor structure their analysis suggested may not necessarily be applicable more generally without modification or local analysis, despite being widely used by other researchers in one form (ABC24-6) or another (ABC17-4) (e.g.: de la Fuente et al., 2013; de la Fuente et al., 2014; Hilale & Alexander, 2009; Ochoa et al., 2012; Willis, 2010; Keinhuis et al., 2011; Lynch & Webber, 2011; Shaukat & Bashir, 2016). Indeed, when reviewing the ABC Scale, Stankov et al. (in Boyle et al., 2015) implied that more work should be done on consolidating some aspects of the scale, not so much by levelling criticism at its construction or theoretical underpinnings but rather by suggesting that, as a relatively new measure (dating from 2003), it would benefit from wider application in the field and subsequent scrutiny of how it is built and what it is attempting to measure.

 

Hence conducting a factor analysis of the data collected in this project using the original 24-item ABC Scale is worthwhile because it may reveal an alternative factor structure that fits the context of this enquiry more appropriately.
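As a sketch of what such an analysis might involve, the following standardizes simulated 24-item responses, extracts principal components and inspects the explained variance; the project's analysis was conducted in SPSS and may have used a different extraction or rotation method.

```python
# Sketch of a component extraction on the 24 ABC items: standardize the item
# responses, run PCA, and inspect explained variance to judge how many
# components to retain. The data below are simulated, not project data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
abc_items = rng.uniform(0, 100, size=(166, 24))    # simulated 24-item responses

scores = StandardScaler().fit_transform(abc_items)
pca = PCA().fit(scores)

cumulative = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cumulative, 0.60) + 1)   # e.g. 60% variance
print("Variance explained by first 6 components:",
      np.round(pca.explained_variance_ratio_[:6], 3))
print("Components needed for 60% of variance:", n_components)
```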
