Jon Fansmith: Hello, and welcome to dotEDU, the higher education policy podcast from the American Council on Education. I'm your host, Jon Fansmith, senior vice president of ACE's Government Relations and National Engagement Office, and I am joined, as always, by my illustrious cohost, Sarah Spreitzer.
Sarah Spreitzer: Hey, Jon, how's it going?
Jon Fansmith: It's going great. We are not joined this week by Mushtaq Gunja, who is out traveling the highways and byways of America-
Sarah Spreitzer: Is he?
Jon Fansmith: ... spreading the good word. I'm not actually sure where he is.
Sarah Spreitzer: Yeah. I think he might be traveling the byways of Washington, D.C.
Jon Fansmith: Possibly.
Sarah Spreitzer: I'm thinking he had a better offer. I definitely think he had a better offer.
Jon Fansmith: It's not hard to imagine. But in his place, though, we will be joined by one of our ACE colleagues, Derrick Anderson, who is the head of Education Futures here at ACE, as well as Lev Gonick, who is the chief information officer at Arizona State University. And those are two people who have spent a lot of time thinking about artificial intelligence or, as everyone just calls it now, AI.
You've been following the policy side of AI. So we're going to have a conversation about that. I think, obviously, a subject that a lot of people are spending time and energy on these days, and maybe as many questions as answers, but hopefully, we'll get a few answers from our guests. Before that though, yeah, there's a few things going on, right, Sarah?
Sarah Spreitzer: Oh my gosh. So many things. So many things. Obviously, we're recording this on a Tuesday, and it feels like we've already been through a week. But obviously, Jon, one of the things, I think, all of our listeners are probably aware of is all of the protests happening on campuses, encampments, especially because they're butting up against school commencement exercises.
The student protesters are calling for divestment by their institutions. They're making certain demands, and I think each institution is dealing with things kind of independently. I don't know if there's one institution that we could point to and say, "Oh, yeah. That institution, they've handled it completely correctly."
Jon Fansmith: Yeah. It's a very difficult time to be a college and university president. I think The Chronicle of Higher Education is tracking 50 ongoing protests at different campuses. And as you pointed out, there isn't a clear playbook here. Right? I don't think any one president or any one institution can say, "This has worked flawlessly for us." And that's not a shock, right? You think about the passions around this issue, think about the conflicting demands of the various constituencies, and then just the challenges.
This is not necessarily the sort of thing that I think a lot of college and university presidents enter into the position thinking they'll be dealing with. So it's an enormously difficult position. And I think one of the things that has certainly increased both the challenge of it and the stakes of it is the outside attention they're getting, particularly from policymakers, that we've seen in the last couple of weeks.
We had Speaker Johnson and Chairwoman Foxx of the Education and Workforce Committee go to Columbia's campus to call for the president to resign. There have been numerous letters and other things happening on Capitol Hill--bipartisan, worth noting. This is not just Republicans. This is bipartisan--calling on colleges and universities to more directly address incidents of anti-Semitism on campus. It is a very high-profile issue right now.
Sarah Spreitzer: Well, and I don't know if we've talked about the timing around this, but probably on the last podcast, we talked about the president of Columbia and her appearance before the Committee on Education and Workforce to respond to congressional concern about how Columbia was addressing anti-Semitism on the campus, and what's happening at Columbia seems to have really grown out of that hearing. Yeah. And that was just at the start of April.
Jon Fansmith: You're right. And coming out of that hearing, it was interesting. I think the immediate reaction within D.C. was that President Shafik had done a very good job in terms of dealing with the congressional inquiry and what was very much intended to be a hostile and adversarial environment, but you also started to immediately see concerns from faculty that she was not robust enough in defense of academic freedom.
And certainly, as you pointed out, there was an immediate response by protesters, many of them students, but some of them, as we've learned, not from the campus community, who escalated the protests, including the encampments that were, at one point, broken up by police and almost immediately resumed. And that sort of set the pattern we've seen at other campuses. I think you're very accurate in assessing that Columbia, in some ways, was the turning point. That hearing was the turning point in how these protests have moved to a new and very, very challenging phase on campuses.
Sarah Spreitzer: Yeah. And I know both you and I have been asked by various publications about, "What does it mean? What are we learning from this experience?" And I think it's just, we are right in the middle of it. It is really difficult to say, "This is what we're going to learn from it. We're going to strengthen campus free speech policies. We're encouraging institutions to do certain things." It is hard being in the thick of it trying to think what it's going to look like when we come out of this.
Jon Fansmith: Yeah. And a lot of what we're finding out... I think you're right. I think there will be a lot of time for reflection and learning from this. It's important to keep at the center of this, what we do very well is educate people. I mean, that is fundamental to the exercise of a college or university.
And so, we have seen a lot of campuses that have done a great job, where they have made great efforts to bring their community together to teach them about this situation and to debate issues in a civil way; that have policies that are long-standing and have been stress-tested against these kinds of situations; and that have been consistent in applying those policies in a way that will not satisfy everyone, but at least shows a fair and consistent approach.
And those are institutions, frankly, you're not hearing about. A lot of other institutions are having to adapt on the fly, changing policies. That's not a criticism. That just says a lot of people did not anticipate the scope or the scale or the intensity of these protests, because we haven't seen this at the national level, at this scale, probably since the Vietnam War protests.
Sarah Spreitzer: Yeah. And it's the end of the academic year for institutions, on top of dealing with all of this. And then I know, Jon, we're going to talk about FAFSA and the delays. I mean, our institutions are dealing with a very uncertain admissions period and packaging aid because of FAFSA. It's a lot to take on at the end of the academic year.
But speaking of FAFSA, I know there was news last week. We saw the head of FSA, Richard Cordray, step down. What does that mean? Are we finally seeing the FAFSA ship right itself? Are institutions starting to get their ISIRs? Are we seeing aid offers being sent out?
Jon Fansmith: It's probably a good idea to separate the two items, because we did see Richard Cordray, he announced his resignation. In some ways, I think people... I certainly was somewhat surprised by the timing of it, but there had been calls for his resignation, particularly after the Education and Workforce Committee hearing around FAFSA, and the problems with the implementation that have been talked a lot about following that hearing.
So not completely out of left field that he would step down. That said, it is occurring at an interesting time. We're not out of the woods on FAFSA, as any campus enrollment or financial aid or leadership person will tell you, but it does seem a little bit like things are stabilizing. The department has gone through a relatively uneventful, in a good way, couple of weeks. They have announced that they're getting all of the data from the IRS that was previously misprocessed or had data errors. Those errors have been fixed.
The ability of students to make corrections where there are errors on their applications, that is up and running. It does seem to be turning in the right direction. Obviously, it's still... I was going to say still very early. It's still very late in the process, and a little early to say that the job has been done, but there are positive signs. And frankly, this is the longest period of consistently positive news we've seen in a while. So hopeful signs, not, in any way, minimizing the significant harm this has already caused.
And I think we are still trying to sort through what this will mean for low-income student enrollment. There was a late surge in the data that we've seen. It looked like more students were applying over the last week and a half, more low-income students. So hopefully, that might reverse some of the losses we were expecting. But again, still early stage to know exactly what's going to come out of all this.
I believe the secretary is going to have to go and defend the FAFSA implementation. The Education and Workforce Committee has scheduled a hearing where he will testify on May 7th. And at that point, we might learn more from the department about their own internal metrics for how it's going.
Sarah Spreitzer: Well, and even before that, they were asking a lot of questions on the appropriations side in the House. And today, on April 30th, we know the secretary is testifying before Senate Appropriations and will likely be getting some questions about FAFSA, but also getting questions about the final Title IX rules, which were released on Friday of... No. Friday two weeks ago, I think. All the weeks are blending together, and I know our team here-
Jon Fansmith: About 10 days ago. Yeah.
Sarah Spreitzer: Yeah. Our team here really dove into them to try and figure out what is going to change and what is in that final rule. And I think our producers will include a link to the webinar, our just-in-time webinar, where we had a group of experts talking about the final Title IX rules. But I have a feeling that the secretary is going to get some questions, especially from members in more conservative states, about those final rules.
Jon Fansmith: And we've already seen a pretty strong reaction to those rules, as you mentioned, especially from traditionally red states. There's been a number of states' attorneys general who have already announced that they will be filing lawsuits to try and block the implementation of those rules. As I recall, and I may get a few of these wrong, Texas is filing by themselves. Louisiana, Mississippi, Montana, and Idaho are going together in a suit. And then Alabama, Georgia, Florida, and South Carolina, I believe, are also going forward in a suit.
So I don't know that those are even going to be all of them. Those are just states' attorneys general. There undoubtedly will be other legal challenges to this. So as with many of these big regulatory proposals we've seen out of the Department of Education, their future is somewhat in doubt because of a complicated legal environment, but it's certainly a very complicated package of regulations.
I would second what you said about checking out that webinar for turning it around in just a few days. It's a really good summary of the key points for campus folks, and very much focused on what campus officials need to know to come into compliance. And that's important, because maybe first and foremost, I don't know if you mentioned this, Sarah, I forget if you did, the regulations require schools to be updated and in compliance with the regulation by August 1st.
So that is not a very long time to make changes to how you staff around this, the training you implement, and internal institutional processes and procedures. These are also, of course, things that always come with a lot of scrutiny. So it's important to be thoughtful about them and do them in the right way. And, of course, the rules keep changing, so it's hard to keep up.
So there's a lot here for campuses who, understandably, might be feeling a little overwhelmed as we go through this rundown of all of the interactions with the federal government: the protests going on, FAFSA challenges, Title IX, and a whole handful of other things we haven't touched on, like the overtime rule. But a lot going on.
Sarah Spreitzer: Yeah. You know, Jon, the big question is whether or not AI can help our campuses actually respond to all of these issues.
Jon Fansmith: I think we should ask Derrick and Lev that question first. Hopefully, they have some great answers. I mean, I-
Sarah Spreitzer: I hope so. Fingers crossed.
Jon Fansmith: I think they will. Maybe not on these specific issues, but certainly maybe some ideas about how AI could help reduce some burden on colleges and universities. But we will find out whether they're up to that challenge right after the break. We'll be back in a second.
***
Jon Fansmith: And welcome back. We are joined by Lev Gonick and Derrick Anderson. And we are very happy to have both of you on, in part because Sarah and I were just talking about the fact that we've decided that you two will tell us how AI will solve FAFSA implementation issues, Title IX compliance burden, and the ongoing range of protests on college campuses related to the war in Gaza. So not to put too much of an ask on you as we start this conversation off, but that's sort of the baseline for success we're looking for here. Good?
Lev Gonick: Derrick, I'll let you take those, and I'll just be standing by to help you out.
Jon Fansmith: In all seriousness, I mean, this is a topic that, with all of the things going on in the higher education world, still keeps rising to the top of discussions. And it doesn't matter whether you're on a campus or you're here in D.C. When you're talking to people, almost inevitably, you come around at some point to a discussion of AI.
So we're very happy to have both of you joining us to make a little bit more sense of this and talk a little bit about where we are and where we're going as AI impacts higher education. But I think I should start really at zero, not solving all of the problems we're worried about. And Lev, I might just start by asking you, ASU, Arizona State University, really kind of at the forefront of the adoption and implementation of AI across the campus, a lot of creative ways. Can you talk a little bit about what the vision is for the implementation of AI at ASU and how you're incorporating this into learning, research, all the other areas of campus operations?
Lev Gonick: Well, first, thanks, Jon, for having me on the program. I appreciate the opportunity to chat with you and chat with my friend, Derrick Anderson. This is exam week here at ASU. A year ago, during exam week, I was in an executive committee with President Crow. And at that time, generative AI was very new, and there were a lot of questions about what it was and whether or not it was of relevance to us here at ASU. And I stopped the conversation. I said, "Let me give you a data point. It's exam week. And yesterday, 43,000 students and staff and faculty at ASU initiated a generative AI experience from the ASU campus to OpenAI. How about we focus in on meeting our students where they are?"
And I think from ASU's perspective, that's been very much the journey this past 12 months, including the efforts that were consummated on January the 18th of this year to actually engage with OpenAI to help co-create a framework for how universities could advance research, leveraging these incredibly disruptive and powerful tools for research, certainly to make the university a more frictionless environment for all members of the community, but most importantly, for ASU to figure out together how we can actually leverage generative AI to advance our charter commitment, which is to our student success.
And that's really been the focus this last year, and I'm looking forward to sharing with you and your audience the hundreds of ways in which that work with OpenAI has taken shape. And even more impressive than that, I think, is how, on our own campus, faculty, staff, and students have been leaning in.
Jon Fansmith: Yeah. And do you think you could talk us through a little bit about what exactly that partnership looks like? What does that mean in terms of looking at your institutional policies? How does that actually work? What does that look like?
Lev Gonick: Well, I think, again, in the last year, it started with, I would say, vigorous and open debate about whether this was the right time to make the move away from debate and... Not so much away from the debate, but to turn down the volume on the debate towards the practical questions of how to leverage the technology, for which, clearly, there's no cookbook yet available, and to begin with questions like, "Do we need a whole bunch of new policies at ASU to actually advance the interest that we have in the research and the teaching and learning environment?"
And to actually advance that work, in the end at ASU, as at most universities, we've chosen not to introduce new policies, but rather to take a look at existing policies, for example, around academic integrity, and update them so that there's a way for our faculty and our students to level-set expectations as to how generative AI, as one example, can be utilized while maintaining the commitment to academic integrity.
And thereafter, honestly, that work also involved significant engagement with the faculty community over a period of nearly a full year to, again, have conversations that were focused on the practical. And so, we actually set up eight modules of content that we've made available to our faculty, co-produced with some of our faculty leaders. Over 1,700 faculty have participated in those eight modules. There have been hundreds of great examples where our faculty and their students are being asked by the media, by their peers, by policymakers to talk about their experience in a very practical way.
And our work with OpenAI, in terms of the pragmatics of it, really came down to advancing that journey and helping frame important things that didn't exist before ASU engaged with OpenAI: making sure that our intellectual property was going to be protected, and that student information was not going to be disclosed, in what's called the enterprise model of OpenAI, which is now available to higher education but wasn't available before we all got engaged with them, just as some examples. And then obviously creating a library of use cases that not only we are using, but that others across higher education are beginning to leverage.
Sarah Spreitzer: So, Lev and Derrick, can I take an even further step back? And Derrick, I know you spend a lot of time in the classroom talking about science policy. Can you provide a definition when we're talking about OpenAI and generative AI? What are these terms that we mean? And therefore, why are institutions of higher education looking to take advantage of these things?
Derrick Anderson: That's a great question. And I think to answer that, I'll take one more step back even further and say that when we're talking about AI, artificial intelligence, we're actually probably talking about lots of different kinds of technologies. But a lot of the generative AI tools that are out there are built on top of technologies that have existed for a long time. We used to call them automation. We used to call them machine learning. We used to call them big data. We used to call them a few other... There's a few...
Lev Gonick: Natural language processing.
Derrick Anderson: Yeah, natural language... Yeah. There's a few other tools. And so, when we're talking about generative AI, we're talking about tools that have the capacity to produce content. Initially, that was text, then it became art, and then it became answers to questions, or, I guess, maybe not in that order. But generative AI tools are tools that respond to a query by generating digital content for the query maker to consume.
But that technology has been around for quite some time, and I felt like I had to remind people that technology doesn't fundamentally change who you are or who an organization is, or who a community is. Technologies are always sort of adopted in the context of existing values. And so, cheaters are going to cheat no matter what, and most students don't come to a classroom and pay tuition so that they can cheat. Most students are there to learn, and they want to leverage whatever tool is available to help them learn, inclusive of generative AI.
Lev Gonick: We have data to validate Derrick's last point here around cheating. Our colleagues at Stanford, in their learning engineering institute, have consistently been measuring the question of, quote, unquote, "cheating" pre-gen AI. And the answer, to this day, and we just had a conversation with them two weeks ago, is no statistical difference.
Sarah Spreitzer: Yeah. So that might be how students are thinking about AI, not as cheating, but as a tool: how can it be used? How are you thinking about the use of AI in the classroom at ASU and at other institutions?
Derrick Anderson: I'll give you a quick reaction. So when a new technology... And in fact, this is the core of the article that I and a few colleagues wrote in Scientific American last week or the week before. When a new technology is introduced into an ecosystem, like a learning ecosystem, typically, what then happens is that the curriculum changes, instruction changes, and assessments change.
And so, I think we're going to see that here. And so, the big question for us as instructors is, "If an assignment that I have designed is easily executable with generative AI, and I know that generative AI is ubiquitous, what does that then mean for that assignment?" And the answer is, "Well, I got to change the assignment." And so, it's the same thing that happened with calculators in the math classrooms in the 1970s. If an assignment was easily executable with a calculator, does that then invite me as a math instructor to change the assignment? And the answer was yes. Math fundamentally changed in the 1970s.
And so, writing now is going to... Every assignment that I've had in the past that's been a writing-based assignment, the expectation that I have for my students now is, "Here's your assignment. I've designed this assignment knowing that you have access to generative AI. My expectations are now that you use generative AI to work on the assignment and to produce for me something that is informed, in part, by your responsible use of generative AI." So the assignments change, the assessments change. And particularly, let me just... Yeah.
Lev Gonick: Let me just give you an example of that at scale. So Derrick generally has the opportunity to teach smaller seminars. At ASU, our introduction to writing composition is taught every year to 22,000 students. So let's just go to the other extreme in terms of innovation and creativity of the faculty, and the very creative ways students are beginning to leverage generative AI.
And it is largely to examine the intent of that English comp experience, which, I think by consensus, is largely to help students find their voice, and then to be able to actually express their ideas in that voice. And for the typical assignment, sure, you could just have generative AI, quote, unquote, "write" the compositions for you, but there's been a complete transformation of the way in which English comp is being taught, and it's evolving. It's not yet at the full scale of 22,000 students, but it is definitely catching on in multiple parts of our English departments, plural, here at ASU.
And that is to actually encourage students to use generative AI as part of the assignment, and then to analyze the difference between their original contributions, their original outlines and versions, and what, in fact, has been offered by generative AI, and to indicate where they think they can continue to demonstrate their own authentic voice in that piece of it.
So that's actually not only learning how to, I think, write better, but also to analyze the authenticity of one's own ideas. And the truth is, for most students, they don't get that in introduction to writing, because you're preoccupied with grammar, and you're preoccupied with the dos and don'ts.
Jon Fansmith: Yeah. I wanted to actually pick up on something you both addressed, sort of indirectly: the questions around academic integrity. And I think you talked very compellingly about why that's not really a concern with the implementation of AI. But I will say, part of the reason we hear so many discussions around generative AI and its incorporation into pedagogy, into instruction, everything else, is that we have heard a chorus, a sort of unending chorus of criticism, too, that this is the end of original thinking. This is the end of scholarship. This is the end of personalized attention from faculty.
And I think, certainly, it's not an equal, one-to-one ratio of critics and skeptics to supporters. But in some ways, you're prophets of the value of incorporating generative AI into the institution. Do you see these concerns? How do you address them? How do you speak to audiences outside of the ASU community, where this has obviously been embraced? What does that look like? What are the reactions you get when you do that?
Derrick Anderson: My first reaction is that as somebody who's optimistic about AI, I remind people that my optimism is rooted in an active recognition that other things will change. And so, in addition to AI now being introduced into the classroom, my assumption is that we will then reflect on how this can and should change the curriculum, and how this can and should change assessment.
I don't know if people are tracking this, but we have a huge assessment problem in this country. We really don't know how to design assessments that reflect the multidimensional intelligence that our students have. And so, it's not like this technology is going to make things worse; we have a problem right now. Hopefully, this can make things better. I mean, my optimism is paired with the assumption that the curriculum will change, instruction will change, and assessment will change as well.
Sarah Spreitzer: So, Derrick and Lev, what do you see the roadblocks being for faculty or campuses that want to start using AI?
Lev Gonick: There are no barriers to starting to use AI. The challenge of scaling AI is that we've got multifold issues to sort through. Some of them are related to campus policies. Some of them are probably related to legislative relationships to campus policies, because legislators have begun to weigh in on things that catch their attention, usually not academic integrity, but more things like deepfakes, which are now somehow bundled together in their minds with the nature of this new technology that's entering the public arena.
But again, there are no limits here for us to begin that work. All across the land, I have not seen this level of interest among faculty, and I mean interest that is not only positive but includes critical discourse on this topic and the practical questions of engaging faculty, probably in the 40 years that I've been engaged in technology and education.
What is common to all of the journeys that punctuate my experience, from the early internet to the browser, from the browser to search, from search to mobile, and from mobile, obviously, into this arena that we're in now, is that for the faculty who are choosing to engage, it is significantly about professional renewal. It is about being reinvigorated by the opportunities.
And some of these are obviously junior faculty who are coming in very excited about the opportunities to leverage their graduate work and so forth, but it's also got to do with several of our most senior faculty members who are really, really finding it to be not only playful, in the sense of it being generative in their own experience, but also an opportunity to do deep linguistic analysis or deep philosophical analysis and the like. And all of that has just reached an important step function in what was otherwise a 50-year slow boil.
Jon Fansmith: Lev, and you mentioned something, and I want to go back to it for a second here. The barriers on the campus may be related to relationships with legislatures or policymakers. And that has been another area, certainly here at ACE, that we track very closely, and that has been interesting. So far, there's been a lot of discussion around regulation on AI, legislation around the use of AI.
I was kind of struck by your point about deepfakes. I think perhaps the noisiest crossover between the generative AI discussion and Washington, D.C., was, a few months ago, when someone created a deepfake video of Joe Biden saying something, and you could just see heads exploding across Washington, D.C., at the idea that this technology could be used to literally put words in your opponent's mouth.
And it has sparked this debate. That said, and Sarah tracks this closely for us, there hasn't been much in the way of regulation, certainly at the state or the federal level. Does that create a good environment for the technology to flourish? Are we looking for guidance from policymakers about what the appropriate uses are? And Derrick or Lev... I know, Derrick, in particular, you cross over both these areas, so it would be helpful to hear from you, but I would love to hear from both of you on it.
Derrick Anderson: Yeah. I'll start really quick. So this is actually what I study as a professor, is the role of governments. What's the role of governments and markets in shaping new technologies? And I've done this in a comparative way, systematically, for a long time. And what seems to be the case, and Lev can speak to this in a more practical way, is that we have here, in the United States, a very high tolerance for complexity, chaos, and ambiguity.
And so, our approach to governing new technologies is typically not one of making rules first and then seeing what happens later. We have a really sophisticated tradition here in the United States of bringing together scientific experts, industry actors, and regulators, and saying, "How do we collectively shape new technologies so that they are aligned with the norms and values that we have as a society?" And then when we find ways in which technologies are noncompliant with our norms and values, then we start regulating.
And so, we see that with social media technologies. We see that with nanotechnology, where I've done a lot of my research, synthetic biology. We see this in healthcare, in pharmaceuticals. And so, I think we're going to see that in AI as well. The rules are going to come, I think, when we have very well-documented instances of the technology being used and adopted in ways that are inconsistent with our norms and values. But we always do that in the context knowing that we only have so much influence. There's always going to be other countries and other regimes that are still going to be developing technologies. Lev, what's been your sort of experience?
Lev Gonick: Yeah. I mean, I'm a student of international political economy. I mean, that's my scholarly work. And I would just point out the obvious, which is, there are going to be winners and losers in this new economy, the AI economy that we're in. And there are people who are protecting turf, who want regulation, and there are folks who are new entrants and looking to disrupt, and they want less regulation.
This is a classic challenge in the introduction of probably any policy area, but particularly in the technology space. Again, I think there is a broader context still on a global basis. I think my European colleagues basically say, "The good news, Lev, is we've got regulation. The bad news is we have no AI innovation." Here, we have an incredible amount of AI innovation and a lot of noise. I think Derrick is right. At the end of the day, there is some kind of temporary consensus or consensus that is adequate for a moment until the politics work out.
But at the moment, we're in a very messy moment. The incumbents have sort of a Janus kind of view. Several of the large players, on the one hand, want to actually have an opportunity to have a regulatory framework to be able to keep out some of the more disruptive players. And at the same time, and we're seeing behavior in the marketplace on this, they're buying and writing checks galore to actually immediately start gobbling up a lot of the smaller players to bring them on board. Some of them they'll use, some of them they'll just kill, as they see them as threats to existing business and revenue.
Sarah Spreitzer: So, Derrick, you've talked to the U.S. Department of Education about some of their ideas around the use of AI, and I think it's been interesting, because it's not just the Office of Educational Technology within the U.S. Department of Education, but also the Office for Civil Rights. And I guess, to Lev's point about there's overregulation, there's underregulation, do you think we're going to be able to find that balance? And is it your sense that we're on the right path for that?
Derrick Anderson: Yeah. I'm pretty optimistic that we'll be able to find that right balance, but I'm not expecting us to be flawless in our execution. And so, I mean, one thing that we do really, really well here in the United States is we observe closely and we listen. And so, I think that's what we're seeing with the Department of Education, is a commitment to observe closely and to listen, and not just to listen, but to listen to a diversity of perspectives.
Derrick Anderson: And I think that's a good thing, and I think that that's consistent with the American tradition that we have here in the United States, and that tradition spans the federal, regional, state, and local levels. But yeah, I wouldn't be surprised if we have moments of overregulation and moments of underregulation, or moments of over-attention and moments of under-attention, for sure.
Lev Gonick: Derrick knows my point of view on this one, and that is, there are bad actors out there. And also, there's actually just a lot of bad code out there. And because the marketplace is so frothy at the moment, I do think that we have an important dialogue that needs to happen, and that I believe organizations like ACE have a particularly important role to play, which is to convene a kind of framework for self-regulation, in coordination with the Department of Education, to make sure that harm, and there could actually be very serious harm, is something that we agree as an industry to self-regulate ourselves on.
And I think the best actors in the edtech industry are looking for a way of differentiating themselves from the noise, and not just in their marketing budgets, but actually in the quality of the code and the quality of the assessment outcomes that they're prepared to engage with third parties to validate. And I do think that this is a very important role, really, for national dialogue.
Lev Gonick: And again, I think the Department of Education, with its ability to convene along with, obviously, ACE, EDUCAUSE, and other organizations, should be looking to do so, so that we can come up with not only what is good and what is bad, but how good is it, and is it good enough to be introduced, especially into the K-12 space. I know that's not necessarily the target audience for our listeners, but there's a continuum here that's hugely important if we want our students, when they arrive at the universities, to have all of the tools and the experiences through their journey as younger folk.
Jon Fansmith: Such a good point.
Derrick Anderson: And I'll just add one more thing.
Jon Fansmith: Yeah.
Derrick Anderson: Lev said that one of the assignments right now is to identify what is good and what is bad. And to be clear, there is good and there is bad. And then a third assignment is to identify what it is that we're working towards. And so, that's sort of like saying, "Hey, here are some guardrails, the good and the bad, and then we're always going to be between the good and bad, but we're going to progress as well."
And so, we have to have this sort of North Star that we're going to be working towards, or a celestial fixed object, knowing that many of our partners are in the Southern Hemisphere. They don't have the North Star. So there's that. But that's the point that we're at right now, is what are we working towards, and how do we stay between the good and the bad?
Jon Fansmith: And I was saying this is a really great point by Lev, and certainly, Derrick, you added to that. We talked about a regulatory environment that differs between us and Europe in some ways: lots of regulation, no innovation; lots of innovation, no regulation. And particularly on the good and the bad: if we are an industry that is willing to implement the good, pursue the good, and reduce the bad... Congress always struggles with technology. How do they police technology? Where do they set the lines? What does that look like?
The better job we do, I think the less likely we are to see counterproductive, harmful, limiting legislation and regulations come into effect. So it is on us, but then we get to reap the benefits as well. So maybe a good place to end our discussion today. Lev, Derrick, you did not solve FAFSA, campus protests, or Title IX. Work on it. You're smart guys. You'll figure it out. I have faith. But thank you so much for joining us today.
Derrick Anderson: Thanks for having us. Appreciate it.
Lev Gonick: Thanks for having us.
Jon Fansmith: And thank you all for listening, and we will come back next month with another episode. Thank you for joining us on dotEDU. If you enjoyed the show, please consider subscribing, rating, and leaving a review on your favorite podcast platform. Your feedback is important to us, and it helps other policy wonks discover our show.
Don't forget to follow ACE on social media to stay updated on upcoming episodes and other higher education content. You can find us on X, LinkedIn, and Instagram. And, of course, if you have any questions, comments, or suggestions for future episodes, please feel free to reach out to us at podcast@acenet.edu. We love hearing from our listeners, and who knows? Your input might inspire a future episode.