Chapter 3: Methodology

This study was designed to evaluate whether an all-female experiential entrepreneurship education program, Ewits, had a positive impact on participating women’s entrepreneurial attitudes and intentions, and what factors contributed to this impact. The study also considered how involvement in the program helped participating women prepare to deal with barriers to entry and retention. Through a mixed-methods design, the study analyzed five cohorts of data, including participant applications, end-of-course surveys, and follow-up surveys, and conducted interviews with 15 participants from three stakeholder groups. The quantitative data were analyzed using R, and the qualitative data were thematically coded using NVivo. The resulting analysis was reported in a case study format.

Research Questions

The questions this research addressed are:

  1. How does Ewits strive to help women overcome barriers to entry into technology entrepreneurship?
  2. What impact does Ewits have on participants’ entrepreneurial attitudes and intentions?
  3. How do participants describe their experience with entrepreneurship?

Research Design

The case study research approach for this study combines quantitative methods (descriptive statistical analysis of demographic data and statistical analysis of program end-of-course surveys) with qualitative methods (program document review and semi-structured interviews). This is a case study within the bounded system of an educational program and its participants, mentors, and program organizers. By using the case study approach, we are better able to tell the story of the educational program and its participants. By looking more deeply at both the program and the participants, we can learn more about why and how they approach entrepreneurship, and how program participation has affected their entrepreneurial competencies, attitudes, and intentions. This research was conducted using the guiding principles of feminist epistemology.

Case Study Approach

Case study is a research approach where the investigator explores a real-life, contemporary bounded system over time. The research uses multiple sources of information, which may include observations, interviews, audiovisual materials, documents, and reports. The resulting research is reported as a case description with case themes. The research could include a single case (a within-site design) or a multiple case approach. The purpose of the case study is to understand a specific issue, problem, or concern. The results present an in-depth understanding of the case. Collecting many sources of data is required because relying on one source is not enough to develop this in-depth level of understanding (Creswell, 2013, Merriam, 1988, Merriam and Tisdell, 2015).

This research report uses a case study within a bounded system. The case study tells the story of Empowering Women in Technology Startups (Ewits) and its participants (organizers, mentors, and learners). By looking more deeply at both the program and the participants we can learn more about how they approach entrepreneurship and how their participation in Ewits has affected their entrepreneurial attitudes and intentions. The case study approach was selected because the study reviewed multiple sources of information in a bounded system. A within-site design is used to look at the program over time. The research examined data from five cohorts of Ewits participants. The results are reported as a case description along with themes that emerge from the study.

Mixed-Methods Design

A mixed-methods design allows the researcher to use a variety of data collection methods that take advantage of the strengths of both quantitative and qualitative methods (Creswell and Clark, 2011). Used together, either linearly or iteratively, the two approaches allow the researcher to understand more thoroughly the problem faced by the community and to work with its members to develop solutions or make a change (Creswell, 2013, Creswell and Clark, 2011, Mertens, 2012, Teddlie and Tashakkori, 2009).

The mixed-methods design was selected because it allows the researcher to address a concern raised about earlier research: the limited ability to tell a deeper story about study participants. This is consistent with recommendations from Ahl, Henry, and Marlow (2012, 2016) to adopt more innovative methodological approaches that utilize qualitative data and avoid male/female comparisons. Data collection occurred in two phases. During Phase One, existing program artifacts, including learner applications, summative course evaluations, follow-up surveys, course curriculum, team business plans, investor pitch presentations, and judges’ scoring sheets were collected and analyzed. These data included both qualitative data from program artifacts and open-ended survey questions, as well as quantitative data in the form of ordinal response survey questions. Phase Two consisted of semi-structured qualitative interviews with individuals from three groups of program participants (organizers, mentors, and learners). The interviews were thematically coded and analyzed along with data from Phase One to formulate answers to the research questions.

Feminist Epistemology

Feminist methodology is a means of conducting scientific investigations and generating theory from an explicitly feminist standpoint. It is a response to concerns about the limits of traditional methodology and an effort to capture the experiences of women and others who have been marginalized by society and by previous academic research (Henry et al., 2016, Naples, 2007, Ramazanoglu and Holland, 2002). This study utilizes a post-structuralist approach to feminist epistemology. As such, the research a) uses a definition of gender as socially and culturally constructed, b) approaches gender as an influence in technology entrepreneurship, c) is mindful of the intersectionality of race, class, gender, and sexuality, d) closely collaborates with Ewits organizers on data collection and analysis, and e) shares the study’s results to consider and implement future changes in the Ewits program. The researcher acknowledges and continues to be aware of issues of privilege and of the sociocultural paradigms and constructs that affect biases. In addition, the researcher, as a formal and informal mentor, has been part of the Ewits community and, as a female community college computer networking professor and entrepreneur, is part of the broader tech and entrepreneurial community.

The data collection and analysis methodologies were selected to conform with current recommendations on feminist approaches to research on gender in entrepreneurship. Feminist research techniques and practices (choice of approach) include discourse analysis, ethnography, case study, interviews, and surveys. Feminist theories about how research is conducted (the approach in action) include qualitative, quantitative, or mixed methods (Calás and Smircich, 2009, Creswell, 2013). Henry and Foss (2016) recommend the use of more in-depth qualitative methodologies such as life histories, case studies or discourse analysis.


Research Context

The context of this study is an all-female, 10-week educational program designed to help women learn how to license a technology through the university technology transfer process and prepare to launch a startup based on that technology. The program is situated within an entrepreneurial incubator at a large Research One university. Program participants were recruited from the academic setting and the local community. The program is designed for women who already hold college degrees and who have expressed interest in technology entrepreneurship. The experiential educational model takes a constructivist approach in which participants are assigned to teams that serve as the founders of mock startup companies. Each team is matched with a mentor, and a technology is selected from the technology licensing office. The mentor acts as the mock CEO and helps guide the team through the business plan creation process.

The teams take part in eight weekly educational sessions which introduce them to the technology licensing process, funding options, and business plan creation. Topics include value propositions, forming the management team, market analysis and strategy, commercialization strategies, intellectual property, financials, corporate structures, sources of funding, business plan development, and company presentations. The teams work between the weekly sessions to develop their business concept, write a business plan, and prepare an investor pitch presentation. The teams present their startup plans at an investor pitch competition during the final week of the program.

Four distinct categories of stakeholders were identified within the program participants: 1) Organizers, 2) Mentors, 3) Learners, and 4) Subject Matter Experts (SMEs). The Organizers are women working in the technology licensing office and in various aspects of technological entrepreneurship who originally envisioned the need for the program and took part in designing and implementing the program and curriculum. The Mentors are successful female entrepreneurs and businesswomen from the local community with a wide variety of expertise in startup companies, venture capital financing, and financial management; the organizers recruited them to serve as CEOs for the mock startup companies. The Learners are women who applied and were selected to take part in the program. Most have already obtained college degrees; many hold graduate degrees, while some are current graduate students. They make up the team members and founders of the mock startup companies. The SMEs are successful entrepreneurs, venture capitalists, and angel investors recruited by the organizers from both inside and outside the local area to serve as presenters and judges. They helped develop the curriculum, recorded video presentations, presented during the course, reviewed business plans, rated investor pitches, and gave feedback to the learner teams.

Data Collection

Phase One of the data collection consisted of collecting, compiling, and analyzing existing program artifacts from five annual cohorts of program participants, from 2012 through 2016. Phase Two consisted of fifteen semi-structured interviews with participants from each of the three stakeholder groups of Organizers, Mentors, and Learners. The SMEs were not interviewed as part of this research.

Phase One (Review of Existing Artifacts)

During Phase One a variety of program artifacts were collected and analyzed. These included learner applications and resumes; end-of-course learner surveys; two annual learner follow-up surveys; mentor and SME bios; team business plans and investor presentations; judges’ scoring sheets; and program curriculum items including presenter videos, participant guides, session outlines, and mentor training materials. For some years, minutes were available from organizer, mentor, and judges’ meetings, including both pre-planning and follow-up meetings, as well as other correspondence documenting participant feedback and recommendations. While all artifacts were reviewed to develop a deeper understanding of the program, the primary analysis was conducted on the learner applications and the end-of-course summative surveys. Table 3-1 describes the available artifacts and the alignment of each to research questions and analysis methods.

Table 3-1. Alignment of research questions to data sources and analysis.

Research Question                                Data Sources            Data Analysis
1. How does Ewits strive to help women           Program artifacts;      Descriptive statistics;
   overcome barriers to entry into               Organizer interviews    Case description;
   technology entrepreneurship?                                          Content analysis;
                                                                         Thematic coding
2. What impact does Ewits have on                Applications;           Inferential statistics;
   participants’ entrepreneurial attitudes       Follow-up surveys       Thematic coding
   and intentions?
3. How do participants describe their            Interviews              Thematic coding
   experience with entrepreneurship?

To answer RQ1, the following artifacts were analyzed:

  • Program description and curriculum materials were analyzed to develop an overall description of the program and to understand how the organizers attempted to affect learners’ entrepreneurial attitudes and intentions and to address the issues of gender bias and ambient belonging.
  • Participant applications (n=283) were analyzed to develop a demographic description of program participants including degree level, prior entrepreneurial experience, and an understanding of why they wanted to participate in the program.

To answer RQ2, the following artifacts were analyzed:

  • Learner end-of-course surveys (n=151) were analyzed to establish a baseline understanding of program effectiveness. Participant surveys were collected by program organizers for all five cohorts. The surveys are anonymous and were required of all team members before they could enter the investor pitch competition. They include questions about program effectiveness and participant self-efficacy on entrepreneurial competencies, in both quantitative and open-ended qualitative formats.

Phase Two (Semi-Structured Interviews)

During Phase Two, interviews were conducted with program participants representing three different stakeholder perspectives (Organizers, Mentors, and Learners). The interviews were used to develop a more in-depth understanding of salient themes that emerged during Phase One. They provided a deeper look at the impact this experiential entrepreneurial educational model had on participants’ entrepreneurial attitudes and intentions (RQ2), as well as insights into how participants constructed their understanding of entrepreneurship (RQ3), including the experience of ambient belonging, socially learned stereotypes, and gender bias. Additional factors and barriers affecting women’s participation in entrepreneurial activities were also explored (RQ2). By including the perspectives of different stakeholders, the research questions were examined from multiple points of view.

Program organizers provided e-mail and phone contact information for all stakeholders so that potential interviewees could be contacted. All Organizers (n=8) were contacted for participation, and the first five to respond were interviewed. For Mentors (n=35) and Learners (n=283), participants were assigned a random number ranking and were then contacted in order of random selection until enough responses produced the desired number of interviews. Interviews were conducted in person or, for participants who were not able to meet in person, via video conferencing. Interviews followed a semi-structured, open-ended protocol (Appendix C). All interviews were recorded and transcribed, and interviewees were given the opportunity to review and respond to their written transcripts. Some participants used that opportunity to clarify or add to the contents of the transcript.

To preserve confidentiality, all interview participants were assigned a pseudonym generated from a random name generator. Table 3-2 provides the interviewee pseudonyms, their career field and/or discipline, their highest degree attained, and a brief description of their previous entrepreneurial experience. The stakeholder groups are not noted in this table because they could be used to identify interviewees and thus compromise confidentiality. The entrepreneurial experiences noted are those in which the interviewees had actual experience; they include working in entrepreneurial support services, participating as a founder in one or more startup companies, and working as an employee for a startup company. In one case, the interviewee was a long-time owner of a single company (an entrepreneur); in another, the interviewee had attempted several startup ventures, but none had yet achieved a level of success that allowed her to leave her primary employment.

In total, fifteen women were interviewed, five from each of the three stakeholder groups of organizers, mentors, and learners. They represent all five cohorts of Ewits participants, with four from 2012, two from 2013, five from 2014, two from 2015, and two from 2016. They hold a variety of degrees: four hold doctoral degrees, seven hold master’s degrees, and four hold bachelor’s degrees. Their entrepreneurial experience ranges from seasoned entrepreneurs and early-stage startup founders to entrepreneurial support services and innovators in larger companies. Their fields of expertise cover a wide variety of science and technology disciplines, including engineering, health sciences, business administration, and communications.

Table 3-2. Interview Participants, Their Career Field, Highest Degree, and Entrepreneurial Experience

Interviewee Career Field / Discipline Highest Degree Entrepreneurial Experience
Brandi Communications PhD Support Services
Deborah Environmental Engineering MS Multiple startups
Gretchen Computer Science MBA Multiple startups
Harriet Business Administration BS Startup employee
Jan Journalism MBA Support services
Julia Biology BS Multiple startup attempts
Kim Health Science MS Startup employee
Monique Communications MS Startup
Nadine Political Science PhD Technology Startup
Natalie Molecular Biology PhD Candidate Startup employee
Rebecca Builder / Developer BS Multiple startups
Roberta Marketing BA Support services
Rosa Accounting MS Support services
Shari Biochemical Engineering MS Multiple startups
Tasha Medical MD Technology startup

Data Analysis


For this analysis, the program applications, end-of-course surveys, and interview transcripts were examined. The program applications and end-of-course surveys from Phase One included both closed-ended quantitative questions and open-ended qualitative questions. All of the data collected in Phase Two were qualitative. Several tools were used in the analysis. The initial combining of data sets was conducted in Microsoft Excel, and Qualtrics was used to capture the data from the paper surveys. Quantitative data analysis was conducted in RStudio v1.0.126 using R x64 3.3.3 and RMarkdown v1.3. Qualitative data analysis was conducted in NVivo using thematic coding and text analysis.

Phase One Analysis

Before beginning statistical analysis, the data sets from the 2013–2015 cohorts were combined into a single file and the data were tidied. The initial combining of data sets was completed in Microsoft Excel, and the tidying and quantitative statistical analysis were conducted in R. The final data sets were cleansed of all identifying information, producing anonymous data sets for analysis. All participant identifying information and references to the program name and university were anonymized. Learners and teams were given random identifiers so that the Learner-to-Team relationship was retained while the identifying information was removed. The resulting data frames were stored for use in data analysis.

Learner application

The program applications include both close-ended quantitative and open-ended qualitative questions. The responses provided by Learners on the program applications were combined with information tracked by the program organizers about Learner acceptance, attendance, and completion. Data from cohorts 2013 – 2016 were combined in a single data set for analysis. This provided the ability to compare results for the entire program, by cohort and within cohorts. The data were analyzed using summary statistics providing an overall description of both the program applicants and the demographics of the Learners. The open-ended questions were exported into a text file and uploaded into NVivo for analysis.

The application was administered on paper in 2012 and as an online survey in 2013–2016. The online application contained the same question set as the paper application; a copy of the original printed application is included in Appendix A. Unfortunately, the program organizers were unable to locate the paper applications for the 2012 cohort, so some 2012 participant data were collected from other program documentation to provide as complete a data set as possible.

The data analysis was completed using the combined tools of Microsoft Excel and RStudio. Before beginning the analysis, the data were combined into a single data set containing the responses from the 2013 through 2016 cohorts. Each applicant was assigned a random identifier, and all personally identifiable information was removed from the data set. The anonymized application data were stored in the file Applicants.csv. To begin the analysis, the anonymized application data set was read from Applicants.csv and assigned to the Applicants data frame. The Applicants data frame was then filtered to extract only the rows corresponding to learners, creating the Learners data frame. Learners are defined as the applicants who accepted the invitation to attend the program, attended at least one session, and were assigned to a project team. The Learners data frame was used for all data analysis after the initial Application Decision table was created, focusing the application analysis on responses from learners who participated in at least one session of the program.
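The filtering step described above can be sketched in R as follows. This is a minimal illustration using a tiny synthetic data frame; the column names (accepted, sessions, team) are hypothetical stand-ins, not the actual variable names in Applicants.csv.

```r
# Synthetic stand-in for the anonymized application data; the real
# Applicants.csv contains different variable names and many more columns.
Applicants <- data.frame(
  id       = c("A01", "A02", "A03", "A04"),
  accepted = c("Yes", "Yes", "No", "Yes"),   # accepted the invitation
  sessions = c(5, 0, 0, 8),                  # number of sessions attended
  team     = c("T1", NA, NA, "T2"),          # project team assignment
  stringsAsFactors = FALSE
)

# Learners: applicants who accepted, attended at least one session,
# and were assigned to a project team.
Learners <- subset(Applicants,
                   accepted == "Yes" & sessions >= 1 & !is.na(team))

nrow(Learners)  # 2 of the 4 synthetic applicants qualify as Learners
```

The same three-condition filter, applied to the real data frame, restricts all subsequent application analysis to participating learners.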

Learner end-of-course surveys

End-of-course surveys were conducted for all five cohorts. The surveys are anonymous, so they cannot be connected to individual learners; however, they do include a question indicating the team to which the Learner was assigned. The end-of-course surveys for the 2012 cohort are missing from the collected data set, so this analysis includes responses from the 2013–2016 cohorts. The program organizers designed the questions, which were updated each year before the survey was administered. Learners were required to complete the survey prior to entering the investor pitch competition, so in principle all Learners completing the course should have completed the survey. However, the number of surveys available (n=151) was smaller than the number of participants designated as completing the course for the 2013–2016 cohorts (n=170).

The end-of-course surveys for the 2013–2015 cohorts were provided as scanned copies of the paper surveys; a copy of the survey from the 2015 cohort is included in Appendix B. To prepare the data for analysis, Qualtrics surveys matching the items on the paper surveys were developed, and all responses were manually entered. The paper surveys were numbered 1–n, and this number was recorded as one of the variables in the data set, allowing validation against the paper survey if needed. The end-of-course survey for the 2016 cohort was conducted using SurveyMonkey, and results were provided in electronic format. The data from all four cohorts were merged into a single data set so that analysis could be run within and across cohorts. Since there were small adjustments to the survey each year, some data manipulation was required to prepare each year’s data set for merging. See “Tidying the Data” in Appendix D for more information on how this data set was merged and prepared for analysis.
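The cohort-merging step can be sketched as follows. The data frames and question names (q_overall, q_new) are hypothetical, standing in for the slightly different question sets across years; the real merge is documented in Appendix D.

```r
# Hypothetical yearly survey frames; in the real data, each cohort's
# question set differed slightly.
s2015 <- data.frame(cohort = 2015, q_overall = c(4, 5))
s2016 <- data.frame(cohort = 2016, q_overall = c(5, 3), q_new = c(4, 4))

# Add the 2016-only question to the 2015 frame as NA so both frames
# share the same column set before binding.
s2015$q_new <- NA

# Stack the cohorts into one data set for within- and across-cohort analysis.
surveys <- rbind(s2015, s2016)

nrow(surveys)  # 4 responses across the two cohorts
```

Questions absent in a given year simply appear as NA for that cohort, so per-question analyses automatically restrict themselves to the cohorts that were asked the question.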

The end-of-course surveys included both closed-ended quantitative questions and open-ended qualitative questions. The quantitative questions are ordinal Likert-scale and multiple-choice items. The qualitative questions are open-ended responses, often attached to an ordinal question to allow a more detailed answer, with a few standalone open-ended questions at the end of the survey. The survey evolved slightly over the years, with additional questions added in 2015 and 2016. The quantitative data were analyzed using descriptive statistics, analysis of variance, and other inferential statistics. The analysis was completed in RStudio using RMarkdown with a variety of R packages providing both numeric and ordinal analysis techniques. The qualitative open-ended questions were exported and analyzed thematically in NVivo; the results are presented as an overview of emerging themes with a sample of representative answers.

The analysis procedures for the Likert and ordinal scale questions are based on “Descriptive Statistics for Likert Data” in Summary and Analysis of Extension Program Evaluation in R by Salvatore S. Mangiafico (Mangiafico, 2016). The Likert-scale questions in this data set all use five-point scales, most with the scale (1=poor, 3=average, 5=excellent). The scale responses are symmetrical: there are corresponding options for positive and negative responses surrounding a neutral response, and some questions include an opt-out (N/A) response. The analysis of Likert-scale questions included question-by-question analysis and the grouping of related items to develop overall measures. For example, questions about the effectiveness of program elements were individually evaluated and then combined to obtain an overall rating of effectiveness. This overall score can then be used to highlight elements that fell above or below the overall mean.

One consideration when evaluating Likert-scale data is whether to treat the data as ordinal or as interval/numeric data. While there is some controversy about which is more accurate (Boone Jr and Boone, 2012, Norman, 2010), evaluating the data as numeric opens up a wider variety of statistical techniques, while evaluating the data as ordinal allows the creation of useful data visualizations. One argument for treating the data as ordinal is that it is often unclear whether the distance between response options is equal: do respondents consider the distance between “average” and “poor” the same as the distance between “average” and “excellent,” and can “poor” and “excellent” be averaged to equal “average”? There are also constraints on interval treatment in that Likert data are not necessarily continuous and do not allow for midpoint or off-scale responses. Ordinal data can be evaluated using nonparametric methods, including bar graphs, medians, and ordinal regression. Treating Likert data as numeric allows the use of parametric tests such as t-tests and ANOVA and allows the reporting of means. While some challenge the use of parametric tests on Likert-scale data, Norman (2010, p. 631) showed that parametric tests “can be used with Likert data, with small sample sizes, with unequal variances, and with non-normal distributions, with no fear of coming to the wrong conclusion.”
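The two treatments can be illustrated with a short base-R sketch on synthetic five-point ratings; the study’s actual analysis used dedicated R packages, so this is only a minimal stand-in showing the ordinal (median) versus numeric (mean) summaries side by side.

```r
# Synthetic five-point ratings (1 = poor ... 5 = excellent) for two
# hypothetical cohorts; the cohort labels and values are illustrative.
ratings <- data.frame(
  cohort = rep(c(2015, 2016), each = 4),
  rating = c(3, 4, 4, 5,  2, 3, 5, 5)
)

# Ordinal treatment: report the median rating per cohort.
aggregate(rating ~ cohort, data = ratings, FUN = median)

# Numeric treatment (defensible per Norman, 2010): report the mean per cohort.
aggregate(rating ~ cohort, data = ratings, FUN = mean)
```

Here the two cohorts share the same median (4) but differ in mean (4.00 versus 3.75), which is exactly the kind of distinction that motivates running both treatments.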

The following R packages were used in this analysis.

  • The ordinal package allows for analysis of Likert data as the dependent variable in ordinal regression.
  • The FSA package provides a Summarize function which produces summary statistics for a factor variable.
  • The psych package provides a general-purpose toolbox for working with both ordinal and numeric data.
  • The lattice package provides additional graphs and visualization options for multivariate relationships.
  • The rcompanion package provides a groupwise median function to calculate medians and confidence intervals for one-way or multi-way data.

Phase Two Analysis

The Phase Two data collection includes the 15 semi-structured interviews as well as the qualitative feedback from the open-ended questions on the applications and end-of-course surveys. This section summarizes the analysis procedures, provides a description of the interview participants, and presents the coding methods and codes used in the qualitative analysis. To preserve the anonymity of all respondents, any identifying information in the transcribed data is blacked out when reporting the findings from this study. The redacted information can include names of participants, companies, organizations, or university affiliations.

Participant interviews

Interviews were conducted either in the interviewee’s office, at a local coffee shop, or via video conferencing. Three of the fifteen interviews were conducted using Zoom teleconferencing; the remainder were conducted in person. Each interview was recorded using duplicate recording methods to ensure there was no loss of data due to technical difficulties. One recording was captured using a LiveScribe pen, which allowed correlation to notes taken during the interview; a second was captured using a cell phone app. For the videoconference interviews, Zoom’s built-in recording function was used. The resulting recordings were uploaded into secure cloud-based storage. The duplicate recording proved a beneficial safeguard, as two interview recordings were lost due to technical failures; in both cases, the backup recording prevented the loss of audio data.

Qualitative analysis

All qualitative analysis was conducted in NVivo using thematic analysis. The open-ended qualitative questions from the applications and end-of-course surveys were analyzed along with the transcripts from the semi-structured interviews. Text from the open-ended questions was exported into text files and uploaded into NVivo. Interviews were recorded using an audio recording device, then transcribed and shared with the interviewees for additional input; two interviewees provided updates and edits to their transcripts. The updated interview transcripts and audio files were imported into NVivo for analysis.

The thematic coding followed several rounds of review (Seidman, 2013). An initial code book was developed from themes arising from the literature review. The first step of coding involved aligning the written transcription to the audio recording so that the audio could be heard while coding; the written transcripts were then coded while listening to the audio. This step allowed the researcher to hear the interview in the voice of the interviewee and to verify that the written transcription was accurate.

During the first pass through the interviews, the transcripts were coded using four broad categories: 1) Background, 2) Entrepreneurial Experience, 3) Ewits Experience, and 4) Gender-Related Experience. The coding reports were reviewed and seven emerging themes were identified: 1) Role Models, 2) Entrepreneurial Identity, 3) Mentor and Team Dynamics, 4) Empowerment, 5) Do the Work, 6) Safe Environment, and 7) Ongoing Support. During subsequent passes, the coding was updated to ensure these emerging themes were captured. The coding reports were then reviewed again, refining the coding alignment and narrowing the focus to the text of the interviews, eliminating unneeded information (Seidman, 2013). Table 3-3 lists the themes that emerged, the total number of references identified from all interviewees, and the total number of interviewees who discussed each theme. After the interviews were coded, the qualitative answers from Phase One were revisited and coded using the code book developed for the interview analysis.

Table 3-3. Themes and number of references for each theme

Theme                             Total Number of           Total Number of
                                  Participants Who          References from
                                  Discussed Each Theme      All Interviewees
Significant role models           15                        Entrepreneurial: 12 (80%)
                                                            Female: 9 (60%)
They are empowered                10                        14
Entrepreneurial identity          15                        16
Mentorships and team dynamics
They have to do the work          8                         13
Ewits as safe space               7                         8
They want more ongoing support

Limitations and Assumptions

Perspectives of the Researcher

One of the basic tenets of a feminist epistemology is that the researcher is integrally involved and co-constructs knowledge along with the research participants and the cultural environment. As the researcher, my current knowledge about these issues has evolved over time and is influenced by my experiences as a female working in the information technology (IT) field and as a co-owner of a feminist bookstore. These experiences provided me with firsthand knowledge of many of the issues discussed and analyzed in this research, and my perspectives evolved further as I reviewed the research on these topics. In addition, I participated as a mentor in the 2014 cohort of Ewits and as a mentor’s mentor in the 2016 cohort. This direct participation in the program allowed me to situate myself within the context of the research and build credibility with the program participants, and I was able to see and experience the process both while working on a team and while supporting other mentors as they experienced the process. I recognize that my perspectives are my own and may differ from those of others who have had similar experiences. I acknowledge and continue to be aware of issues of privilege and of the sociocultural paradigms and constructs that affect biases.


Women differ in many ways, including race, gender expression, age, background, and culture. While this study focuses on the effects of gender on women’s participation in entrepreneurship and technology, it is important to acknowledge that there are many intersecting influences on how women approach their lives and careers. It is not the intent of this research to ignore the other diversity issues that so desperately need attention. While this study looks at the rates of female participation in technology and entrepreneurship, the rates of minority participation in these fields are also of concern, and throughout this report both gender and racial diversity data are presented to develop a fuller understanding of diversity in these fields. What this research discovers about how women succeed as “the other” in a masculine-gendered field may also apply to other underrepresented minorities. The experiences and barriers faced by women may be different, but the paths through these obstacles may be similar: the support structures needed to overcome gender bias and stereotype threat are the human issues of respect, mentoring, and access, which come relatively easily to individuals who have been privileged as the majority representation of success.

Population Bias

The population studied in this research is a highly privileged one: participants are highly educated and have many resources available to them. All the women who participated in the program self-selected and indicated prior interest in entrepreneurship. What was learned from this population may therefore not generalize to the broader population. Additional research is being conducted on a parallel program that targets a more diverse, less affluent, mixed-gender group. The results from this study can be compared with those from the parallel program to determine how the female-only program compares in its effectiveness.


Summary

This study uses a mixed-methods design with a case study approach to examine how this all-female model of entrepreneurial education affects participants’ entrepreneurial competencies, attitudes, and intentions. This chapter has explained the research design, context, participants, data collection procedures, data analysis procedures, and the study’s limitations. Chapter 4 presents the findings from both the quantitative and qualitative portions of the study.