I just learned about three pretty cool dashboard webpages for showing students energy production / consumption, which might be useful for discussing how electrical energy usage is related to climate change.
I'd love to know about other options, and whether anything like the last two exists for North America or the U.S. Let me know if you've seen anything like these that I should be aware of!
At least once a semester, I realize that I have forgotten all the apps and websites I've seen that help produce graphs for class use. These graphs can be used for formative assessments or for quizzes/exams in class. In no particular order:
GraphSketch.com (I have not used it, I just sort of discovered it by googling.)
There is a pretty good Mac app called GraphSketcher, which is no longer in development, but mostly still works. Alternatives to GraphSketcher are mostly programming environments.
There's another Mac app called Grapher which is basic but sometimes useful.
I have downloaded a spreadsheet with adjustable sliders (filename Adjustable XVT Graphs_2020_Sliders_VBA.xlsm) for making kinematic graphs (credit: Dan Hosey). I can't find a current link to that spreadsheet, but here is the version Dan put on Desmos.
Is it better to do traditional physics problems...or would there be value in structuring problems so that the answer is stated in the problem?
For example, when I think of a "traditional" physics problem, I think of something that looks like this:
If air resistance is negligible, determine the maximum height (above its release point) of a ball that is thrown straight upward and is in the air for a total of 3.0 seconds.
But, what if the problem were stated more like this:
Show that when air resistance is negligible, a ball thrown straight up that is in the air for 3.0 seconds reaches a maximum height of 11 meters above its release point.
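For the record, the 11 m follows from basic free-fall kinematics: with negligible air resistance the ball spends half of the 3.0 s rising and half falling, so the fall from maximum height takes 1.5 s and

$$h = \tfrac{1}{2}gt^2 = \tfrac{1}{2}(9.8\ \text{m/s}^2)(1.5\ \text{s})^2 \approx 11\ \text{m}$$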
In my mind, the second version explicitly puts the emphasis on the process and the reasoning behind the process, whereas the traditional problem naturally emphasizes the answer to the question. I can see this approach being used in the classroom for homework practice as well as for assessment.
What am I missing here? Why isn't this done for intro physics classes?
Today in PHYS 201 we connected motion diagrams to graphs and introduced the concepts of position, displacement, average velocity, and average acceleration. We also discussed how the slope of a graph connects to these quantities, and how taking the limit as the time interval becomes small leads to an instantaneous velocity or acceleration. The 10:00 section was able to use calculus to go from a constant acceleration back to an expression for position as a function of time. That section also got the first packet of nTIPERS.
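That calculus is just the standard pair of integrations, starting from a constant acceleration \(a\):

$$v(t) = v_0 + \int_0^t a\,dt' = v_0 + at, \qquad x(t) = x_0 + \int_0^t v(t')\,dt' = x_0 + v_0 t + \tfrac{1}{2}at^2$$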
The PHYS 110 class started by looking at the class data from the previous mini-lab. It was not as clean as I'd hoped, but we talked a bit about variance and uncertainty in data and how to improve that.
Then we followed that discussion up with one of my favorite demos - taking a set of 5 balls of various sizes and masses and challenging the groups to order them by weight. The key to this demo is that there should be a small steel ball which weighs LESS than a larger foam ball. Most groups will say the metal ball is the heaviest. Today there were two groups that said the foam ball was not the lightest, but the other four groups said it was. Students are amazed to learn the foam ball is the heaviest and the steel ball is less massive than two of the balls (at least) in the set. It is a dramatic illustration of the body's sensitivity to pressure.
After finishing up the intro to basic physics, we started on Simple Harmonic Motion and got through part of the PhET demo on Masses and Springs as a mini-lab.
Then I challenged the class to make their own motion diagrams using a blinking LED on an Arduino captured with a long-exposure photo. My idea was to use the exercise as a way to have a goal, try something out, analyze the results, and iterate until reaching the desired outcome. I think it worked all right, but I wish we had more time.
In Physics 110 we introduced ourselves to each other and then started getting into the basic physics material. I passed out the syllabus and went through that.
We got all the way through the first "mini-lab" I had set up, which was finding the acceleration of the IOLab carts going down a small ramp. I realized as the class was doing the lab that I have no idea how to save the data. Oops. Going to need to figure that out!
Today in PHYS 201 we did this excellent activity which was linked to on twitter. I'm glad I saw it and tried it out with the classes, because I had run out of time after having the class do the FCI and handing out the syllabus. Having only 90 minutes each day is going to make each day seem tighter, but I think overall it will be better.
Last weekend I was at the Illinois Section AAPT meeting, where I gave a presentation of my foray into Standards-Based Grading. My main points in the presentation were that I have observed:
a.) Most of the people who try SBG the first time write too many standards initially
b.) It's really hard to find a list of standards used in college physics classes online
I've been drafting a set of standards that I would feel comfortable using for a first semester physics class. To address the first point from above, I've whittled it down to 18 standards, although several have multiple parts to them.
I believe that I can assess these standards in chunks of less than 18 assessments. I am aiming for 13-14 nominal assessments with the opportunity for re-assessments on any of them.
I am also working on an as-yet-unwritten lab standard (or standards), which I will likely need help with.
To address the second point from my talk, I'm putting the draft up here for review from the community. I would love to see a discussion of physics faculty from all levels getting involved on building a set of standards that work well. (Not that I want the standards to be, uh....standardized on any level beyond a classroom....)
Here are my draft standards for first-semester intro physics, algebra-based. We move oscillations and sound to the second semester, in case you're wondering where they went. Thank you (in advance) for any thoughts you have on them.
Physics 101 Standards (Draft Spring 2014)
1.) I can interpret and construct graphs of objects in 1-D motion.
2.) I can apply a logical problem-solving process to model the motion of objects moving in 1-D.
3.) I can resolve vectors into their components.
4.) I can add and subtract vectors graphically as well as by components.
5.) I can recognize situations described by projectile motion and apply an accurate model of the 2-D motion to determine unknown quantities.
6.) I can apply Newton’s laws of motion for objects in equilibrium as well as objects in motion including:
a.) single objects b.) connected objects c.) objects in contact with a spring d.) objects in circular motion
7.) I can recognize situations where the Work-Kinetic Energy theorem applies and solve problems using the theorem.
8.) I can recognize situations where the conservation of energy principle is appropriate and apply the principle to those situations, including:
a.) objects under the influence of a gravitational field b.) objects in contact with a stretched or compressed spring
9.) I can identify situations where impulse is used and correctly apply the momentum-impulse theorem.
10.) I can identify situations where conservation of linear momentum is appropriate and correctly apply the conservation principle to those situations including:
a.) elastic collisions b.) inelastic collisions
11.) I can evaluate (graphically and analytically) the quantities describing rotating objects in terms of their linear equivalents, including:
a.) angle b.) angular velocity c.) angular acceleration d.) moment of inertia
12.) I can apply the conservation of energy principle to rotating objects.
13.) I can apply Newton's second law for rotational motion for
a.) objects rotating b.) objects in static equilibrium
14.) I can identify situations where materials are subject to thermal expansion and calculate the change in their length, area, or volume.
15.) I can determine the equilibrium temperature when materials of different initial temperatures are brought into thermal contact with each other.
16.) I can differentiate between conduction, convection and radiation mechanisms.
17.) I can apply the ideal gas law and the results of the kinetic theory of gases to calculate properties of gases.
18.) I can determine the energy transferred by heating required to change the temperature of a material or to cause a material to change phase.
I've been taking baby steps towards standards-based grading (SBG) for over two years, but this semester is the first time that I've really implemented the core ideas of SBG in any of my classes.

Last Fall I had (with my colleague) the opportunity to rewrite the learning outcomes for our introductory astronomy course, ASTR101. This is a general education survey of astronomy course without a laboratory; it is 3 credit hours and covers the solar system, stars, and galaxies. Our campus assessment specialist pushed us to look at the revised Bloom's taxonomy word list to come up with descriptors for what we wanted our outcomes to be. I really don't like how our campus uses Bloom's taxonomy, but my opinions are a topic for another time. After the outcomes were written and approved, I realized that I could implement them almost unchanged as standards for a real step towards SBG.

Here's what I did: I went from three exams plus a final to no midterm exams but nearly weekly quizzes. Each quiz is "scored" on a 0-5 point scale which measures mastery of the standard being assessed. Students have optional homework assignments on MasteringAstronomy (which, by the way, is NOT optimized for SBG), but they are required to do the homework if they want to re-assess by retaking the quiz. If they want to take the quiz a third time, they have to visit my office for a discussion about the standard before they are allowed a third shot. After the third try, the standard is closed. Grades are weighted: 40% is based on a semester-long astrojournal, 35% on the SBG-style quizzes, 10% on a Just-in-Time-Teaching style reflection/reading review that students submit online, and 15% on a cumulative final.

So far, I've had a fairly positive experience with this in astronomy. I should write down my workflow for getting all the assessments prepared and scored. I have had some students come in for reassessments, and I am expecting to see more as the semester progresses.

What I could really use is a bit of feedback on how the standards are written. I can't change the learning outcomes, but I can tweak the standards if appropriate. Some standards are broken into multiple parts so that, if necessary, I have the option of breaking them out into multiple assessments. The goal was to have no more than 15-16 assessments. Here are the standards as I wrote them out:
1) Explain how astronomical objects move in the sky.
2a) Explain the cause of the seasons.
2b) Explain the cause of moon phases.
2c) Explain the cause of eclipses.
3) Describe how the heliocentric model of the solar system was developed and why it was adopted over the geocentric model of the universe.
4a&b) Apply Kepler's Laws of orbital motion and Newton's Law of Universal Gravitation to objects in the universe.
5) Describe the functions of a telescope and types of telescopes and explain why some telescopes are placed on the ground and some in space.
6) Explain how astronomers use light to determine:
a.) the luminosity of stars,
b.) the temperature of stars,
c.) the size of stars,
d.) the chemical composition of astronomical objects, and
e.) the speed and direction of an astronomical object's motion.
7) Describe the nature of our solar system and how it was formed.
8) Explain how astronomers use the Hertzsprung-Russell diagram to study properties of stars.
9) Describe how stars are formed, evolve and die.
10) Describe the structure and size of the Milky Way galaxy.
11) Compare the Milky Way galaxy to other galaxies.
12) Explain how astronomers know that the universe is expanding and how they determine the age of the universe.
I'm trying to figure out how to handle giving quizzes and exams next semester in my algebra-based physics courses, my intro astronomy course, and my general education course in musical acoustics.
There has been much talk online recently about Standards Based Grading (SBG) and related assessment strategies. I'm not diving fully into the SBG waters, and currently my issue isn't directly related to going towards SBG. The reason I mention SBG is to give some context.
Introductory Physics Courses
A few years ago when I learned about SBG, I sort of had the wrong idea of how it was supposed to be implemented. I liked the philosophy which allowed for students to learn at their own pace and to be reassessed on understanding of the standards. I also liked the idea of using student-made screencasts (Thanks to Andy Rundquist for leading me down this path) as assessment methods. Because I get to hear the students explain the physics in their own words, I can really find out what they understand and what they are simply regurgitating from class or the book.
Grades in intro physics are made up of the following parts: online reflections of what they did in class and read in the textbook, screencasts done for homework, lab reports, weekly quizzes, midterm exams and a final. I consider the lab reports to be drafts which can be corrected and submitted until they are satisfactory. I also consider the screencast homework assignments to be practice for taking quizzes and exams, so I provide feedback on the screencasts and allow them to be resubmitted as many times as needed until correct.
Quizzes and exams are done in a traditional way - all the students spread out as far away from each other in the classroom and work independently on the quiz or exam for a set amount of time. For quizzes, I provide relevant (and sometimes not-so-relevant) equations, but for the exams students prepare their own equation sheet. I usually give 20 minutes for a quiz and 2 hours for an exam.
General education courses - Intro Astronomy and Physics of Sound, Music and Hearing
In the gen ed courses I do not use screencasts. The only homework that the students are required to do is the classroom reflections. Astronomy is not a lab course, so there are no lab reports, but they do have to do a semester-long astronomy journal project. In the acoustics class, students design and build their own musical instrument. I'm pretty happy with those parts of the grading process.
But the exams are something else. Again, I have typically given "traditional" type exams where all students work independently. I typically supply equations for these classes.
There is a pattern that has started to emerge over the last few semesters in astronomy. The first part of the pattern is that the class average on the first exam lands somewhere in the mid-60% to mid-70% range. For many students this is shockingly low, but in the 10-ish years I've been teaching the class, the average on the first exam has never strayed far from this mark. Typically we then have a discussion about how they now know how the exam is structured (even though we discussed it thoroughly beforehand) and about what changes they should make in preparing for the next exam. I've also been weighting the first exam less than later exams in recent years, to alleviate the concern that their grade is sunk after one poor exam. The next part of the pattern concerns the second exam (out of three midterm exams). In most semesters before the last three, the class average would rise to just below 80%. More recently, though, the average has gone down. Significantly down: into the low 60% range.
Frustrated by this pattern, I offered to allow group exams in astronomy on the third midterm. Working together, the students were able to significantly bring up their scores, although implementing the group exam brings in its own set of challenges in terms of how I score it fairly.
What am I really trying to encourage?
There are some maxims that are sort of swirling around in my head whenever I think about what I'm going to do next semester. One is the saying about how students don't really respond to what you want them to do (or what's best for them) but they will respond to what they are graded on. I guess I can't really think of the exact saying right now, but I think a lot about how to incentivize the intrinsic motivation to pursue deep learning without having to provide the extrinsic motivation of points towards a grade.
The other related thought that I can't quite decide how to address is the idea that if I want to encourage a type of behavior or thinking, then it SHOULD be a part of the grade somehow.
So for example, last semester in astronomy we used the lecture tutorials by the CAPER team as purely formative assessments. Students were told they would not be graded on them, so they should work together and feel free to make mistakes that we would correct in class. My class never fully bought into the idea of taking the tutorials seriously as a way of being actively engaged in the class. Even after the first exam had 80% of its questions based directly on the lecture tutorials, and the students themselves recognized how much of the exam was based on the tutorials, they did not believe that collaborating with others on the tutorials was necessary.
And why should they have? I was not going to be rewarding them for working with others as a part of their grade, after all. I think that perhaps if group exams were a part of the course from the start, they would have reason to work with others in the class from the beginning.
But, what about the general physics course? I believe Eric Mazur's Harvard course has some form of open-book, open-note policy on quizzes and exams. Others have used group exams in these courses. What am I trying to encourage? I think I am trying to encourage students to work together collaboratively, but am I grading that way? Should I be? Isn't part of the course figuring out how to take quizzes and exams by yourself?
The real reason I need to figure this out
I have a conference that is going to take me away from school the last week of the semester before finals. I am not happy with this schedule, but there is not much I can do about it right now. What I'd like to do if possible is eliminate in-class exams. Since I typically give three mid-term exams, that effectively gives me back all my time I would be missing at the end of the semester…although it's really never the same. But if I give take-home exams, for example, how should they be structured? Do I explicitly forbid collaboration and trust the students? That seems to go against the classroom dynamic that I would like to foster of students working together. Do I explicitly encourage students to group up and work on it? That would seem to disadvantage students who have busy work and home schedules and cannot easily pop back and forth to campus.
The one idea I've had is to give the exams as take-home exams and allow for students to group up if they want. But instead of them handing in the exam, have them make screencasts for each problem on the exam. That way, I hear each student explain it in their own words, just like the homework. I just don't know if I can grade that many screencasts in a reasonable amount of time.
TL;DR
How can I improve the way I assess and evaluate students next term? How closely does the grading policy align with my philosophy on learning and what can I do to improve that?
"Mr. Khan, you have a team of teacher advisors. If none of them can identify these gaps for you, you need to ask for help from the larger community (and then to reexamine your hiring practices)."
Out of all the criticisms of the Khan Academy, this is the one that upsets me the most. That KA in general, and Sal Khan personally, cannot find it in themselves to reach out to the education community to improve their offering indicates to me that they must not care about having high-quality resources on their site, only about having a high quantity of them.
Over a year ago, I posted my critique of the stellar parallax videos. In my critique I pointed out several (at least four) things that I thought were really good about the explanations of parallax. But I also pointed out some huge problems with the videos, including the incorrect depiction of the night sky showing the East and West directions reversed (starting at about the 8:40 mark in my video). Apparently, Sal Khan does not know which way East and West go. None of the videos on parallax have been changed in the past year.
I realize that I'm just one guy, and maybe KA has no reason to listen to me. (Sal has yet to take me up on my offer to have him talk with us at the Global Physics department or an AAPT conference.) I have taught intro astronomy at least 20 times and we usually spend an hour or two of class time on parallax not including review time or out of class discussions that I have with students. I have invested at least as much time prepping for teaching these classes, so I have at least 40 hours of experience in teaching this topic alone. I know there are teachers out there with even more experience than that, and I am constantly looking to learn from them. When I learn a better way to teach a topic, I alter my approach. Why isn't the same true for KA?
I pointed out above that in my critique of the parallax videos I thought there were some pretty good things about them, including at least one part of the explanation that was unique (and accurate) and I hadn't seen anywhere else. I'm pointing this out again in part because I'm not interested in rehashing any of the tired arguments that supporters of KA bring up over and over again.
Let's talk about how KA can engage with great educators at all levels so that we can all get better at what we are trying to do. Some of the KA staff do engage with others in discussions, but they sometimes miss the point. In the Hacker News discussion that I linked to above, Ben from KA says this:
"It's difficult for us to work through all of the submitted issues because most of them are from students who don't understand the problem or have made a mistake in their work, not real issues with the content. We always keep an eye on the number of issues per exercise, and we're lucky to have volunteers who read through the issues and surface the real issues."
To be fair, Ben is talking about responding to issues related to homework-like problems on KA. But, his statement reveals the heart of the problem with what KA is trying to do: engaging learners in a meaningful way using algorithmic methods doesn't always (often?) work. I would argue this must be especially true for conceptual learning, which is the root of deep understanding in most topics.
Since I am going to use the "Measure the value of pi" lab for introducing graphing, equation editors, and potentially writing a lab report, I want to have a lab where the goals and the experimental procedure are developed by the students in the class, from top to bottom, as much as possible. Here's my idea:
I plan to give students a taped box of matches or toothpicks with a random number of matchsticks or toothpicks in them. Alternatively, I could have a single full box that students could have with the ability to put as many or as few matchsticks or toothpicks in as they choose. Hmmm….going to have to think about that one.
There will be only one rule: students cannot open any box until the lab is done and the report is written.
I'm going to ask students: What do you want to know about the box? I want them to write their questions down before they say anything. Then I'll have them discuss in small groups with whiteboards. Then we'll have a short all-class discussion of the questions they are interested in.
I am hoping that the groups will come up with at least "How many matchsticks (or toothpicks) are in my box?" But I would also be thrilled if they came up with questions like "What is the mass of a single matchstick (or toothpick)?" and "What is the mass of the box (and tape)?" These are my goals for the lab; it will be interesting to see what the class comes up with. If needed, we can have a discussion which leads us to these goals.
I will have prepared ahead of time several boxes, each with a unique number of matchsticks or toothpicks inside and that number marked on the box itself. I will try to use the same amount of tape on each box, so that the boxes are as identical as possible.
This is the first lab where I will encourage the students to determine the process by which they will meet the goals of the lab. I don't know if it's the best way to encourage this process, but it should be a good follow-up to the pi lab. I don't think there are many ways to find the mass of a single matchstick or toothpick, the mass of the box, and the unknown number in the initial box, other than using a linear fit model, but I'll leave the students to figure that out.
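For my own notes, here's a rough sketch of the analysis I have in mind, with made-up numbers just for illustration: the slope of the fit is the mass per stick, the intercept is the mass of the empty taped box, and the unknown count comes from inverting the fit.

import numpy as np

# Made-up data for illustration: stick counts for the labeled boxes
# and the measured mass of each box (grams).
counts = np.array([10, 20, 30, 40, 50])
masses = np.array([12.1, 14.0, 16.2, 18.1, 20.0])

# Fit mass = (mass per stick) * count + (mass of empty taped box)
slope, intercept = np.polyfit(counts, masses, 1)
print("mass per stick: %.3f g" % slope)
print("mass of empty box: %.2f g" % intercept)

# Invert the fit to estimate the count in the unknown box.
unknown_mass = 17.3  # grams, also made up
print("estimated stick count: %.1f" % ((unknown_mass - intercept) / slope))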
I want to have students make predictions or guesses for the quantities they want to measure before doing the lab. I think it will be interesting to see how these guesses compare to the experimental data. We can have some discussions about orders of magnitude if there are wildly varying guesses.
Then we'll do the lab. I tried this myself and it went really quickly. There's not much to do other than mass the boxes and record the data.
I want to make sure every student has a graph of the data and has used equation editor to express their best fit line equation. If time permits, I want to have them start to write the lab report.
My worry with this lab is again that it is too easy. I hope that by emphasizing the students' control of the goals and procedure it will hold their attention to the end of the lab. Plus, if we finish early, I have an idea for the next lab to do. :-)
The above five lines of python represent the accomplishment of a (small) summer goal of mine: to have one cohesive python install with all of my favorite python packages for writing code for physics classes and research.
I don't know what happened earlier this year, but I had b0rked up my python install on my school laptop. I have a Macbook Pro, and was running into all sorts of problems: 32 bit vs. 64 bit, which install of python to use and whether or not I could use matplotlib and vpython at the same time.
What I ended up with was two installs of python: one that could use vpython and one that could use matplotlib, but never the two at the same time.
As you can see in the screenshot, I have everything working now with the excellent enthought distribution of python. What I learned today was that ALL the packages in the vpython dmg file are required to run vpython. I don't remember my original thought process which led me to believe I didn't need the other packages, but I did. Also, I learned today that you can't always (simply) run vpython calls from the python shell, unless you limit the rate of displaying frames by putting rate() inside a loop. More on that later, maybe.
I know the above packages have significant overlap (scipy extends numpy, pylab has matplotlib, etc.) but I've used each of those in various forms, so I wanted to be able to call any of them without having to THINK about it. Done.
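For future me, here's the minimal sort of test I ran to convince myself of the rate() behavior (this assumes the classic 'visual' module from the vpython dmg):

from visual import *  # classic VPython 'visual' module

ball = sphere(pos=(0, 0, 0), radius=0.5, color=color.red)
velocity = vector(1, 0, 0)
dt = 0.01

# rate() inside the loop caps the number of iterations per second,
# which gives the display thread time to actually draw the frames.
while ball.pos.x < 10:
    rate(100)  # at most 100 loop passes per second
    ball.pos = ball.pos + velocity * dt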
I had my annual review with my dean yesterday. It went well, and although I don't have much to say about the review specifically, it was a time to reflect on things that went well and not so well in class this past year.
What I've come up with are things that I'm going to try in order to build on what has been going well and correct things that are not going so well:
I'm going to completely change how I assign what I call "Reading Reviews". This is probably going to be my biggest change next year, and I plan to have a separate post on this topic soon.
Lab report first drafts are going to be due the next class period we meet after the lab has been completed. Part of the point of a science course should be to model good scientific processes. I am terrible at quickly writing up scientific work that I do. But, if I want my students to start forming good habits, then we need them to write up what they do in lab as quickly as possible. That way there is less time for the memory of what was done in lab to fade. I will continue to use the policy of allowing as many rewrites as needed to get full credit on the lab.
I saw that Joss posted about his concerns about lab report revisions on twitter today:
I love using revise until correct but spend too much time worrying about Ss shooting themselves in the foot without constant deadlines
— Joss Ives (@jossives) June 25, 2013
That's a concern I have, too, but ultimately the responsibility is on the student to meet the expectations put forth in the syllabus.
I give a lot of short quizzes the first part of the semester, hopefully to encourage preparation for exams. I am planning to grade the quizzes in class as soon as they are done. We use a studio physics classroom, which makes for long class periods (at least 2 hours), and since I see my role as a facilitator of the learning environment rather than a lecturer, I will need to have mostly self-directed tasks for the class to work on for the 20ish minutes it will take to score all the quizzes. My goal is to have the immediate feedback encourage the students to come to class better prepared for quizzes and exams. Even though I'm not using standards-based grading specifically, I am striving towards keeping the spirit of SBG. (Excellent post by Frank Noschese.)
Based on what I learned at the New Faculty Experience, I will be trying to make more frequent reflections. I started doing this late in the year using Evernote. Part of my problem with doing it regularly was that I had a tight schedule last semester. In the Fall I won't have that issue.
If I could identify a few things that went really well this year, I would say that in the Spring term especially, I used a lot of the TIPERs activities with whiteboards fairly effectively. I do think I need to better introduce how the class needs to share their whiteboard work with each other. Also, although we were able to do more labs (and more effectively, I think) I have some plans to better introduce some of the skills I expect them to use throughout the term.
I'm really happy with how the last year turned out. I am looking forward to the Fall, but really happy to have the Summer to prepare for it.
It's time for me to rethink how we look at the Doppler effect in class.
When I'm working with a class on the physics of sound, whether it is a general physics class or a physics of music class, the Doppler effect is always one of the topics in whatever textbook we are using. Time permitting, we will look at the Doppler effect. I think it's an interesting topic, and it certainly has important and useful applications in other fields of science: astronomy, medicine, weather, etc.
What I can't stand, though, is the terrible quality of the standard acoustic Doppler effect demos. I'm talking about the demonstrations where you take a sound source and tie a string to it, then whirl the string around your head with the sound source making a circle around you. The class is supposed to hear the change in pitch alternating between getting higher (while the source is moving toward them) and going lower (while moving away from them).
Unfortunately, the demo often has two major flaws with it:
1.) The change in pitch is often within the just noticeable difference (jnd) for non-ear-trained listeners.
and,
2.) The change in pitch is almost always overwhelmed by the observation of the relative change in AMPLITUDE. As the source moves away from the class it sounds quieter, and as it approaches the class it sounds louder.
The situation is not much better when the demo uses a "Doppler Rocket" or "Doppler Ball," where a sound source is embedded in a soft ball that is then thrown or slid along a guide string across the room. The frequency change can be more noticeable with these demos, since the speed of the ball can be high enough to push the frequency shift outside the average jnd, but my experience has been that the change in amplitude is even more dramatic with the Doppler Rockets.
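For a sense of scale, the standard expression for a source moving directly toward a stationary listener is

$$f' = f\left(\frac{v}{v - v_s}\right)$$

so a 440 Hz source moving at, say, 5 m/s (my guess at a typical whirling speed, with \(v \approx 343\ \text{m/s}\) for sound in room-temperature air) gives \(f' \approx 446.5\ \text{Hz}\), a shift of only about a quarter of a semitone.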
What to do? For me, it's interesting. I mean, one of the main ideas in science is that we only want to test one thing at a time. But in this experiment, we seemingly have two inextricably linked quantities that are changing.
I don't have a real good answer for what I want to do to get around these conceptual challenges. I've tried an Interactive Lecture Demo style presentation on the Doppler effect without the success I was hoping for. I may try that approach again, but I'm sort of leaning towards making the Doppler effect a lab activity where students have to confront the two aspects of the experiment (frequency and amplitude) and tease them out separately. I'm not sure if that will work at the introductory level, but I'm willing to try.
Here's another comic that I think could be used to spark discussion in a physics class. Some questions that I would want to elicit from my class:
How long is Marcus watching Jason fall?
What forces are on Jason?
How would we characterize Jason's motion? (constant velocity vs. accelerating)
Then, to go Mythbusters-style on the discussion: is the scenario plausible? Of course, it is a comic strip; it does not have to be plausible. But if it is not plausible, what conditions would have to be met to get the motion depicted in the comic?
Finally (for this post, at least): is Marcus right in what he says at the end? Would using less helium make a difference?
I have an honors student this semester who is working on vpython simulations of general physics systems. She knew no programming as of two weeks ago, but tonight was really close to having a projectile motion simulation done.
In trying to put velocity vector components on the projectile's position every 10 time steps, she was running into a problem: using the modulus operator only worked on the first two vectors, then did not.
Here was her code:
while ball.y >= -2.75 and int_velo > 0:
    rate(100)
    t = t + dt
    ball.pos = ball.pos + ball_velocity*dt
    ball_velocity.y = ball_velocity.y - 9.81*dt
    if (t*100)%10 == 0:  # doesn't work!!!
        vel_xvec = arrow(pos=(ball.pos.x, ball.pos.y, 0), axis=(int_xvelo, 0, 0), shaftwidth=0.5)
        vel_yvec = arrow(pos=(ball.pos.x, ball.pos.y, 0), axis=(0, ball_velocity.y, 0), shaftwidth=0.5)
The comment tells the story. I guess I should know more about the modulus operator in python. It seems simple to use, but I couldn't really figure out what the problem was initially.
The short version of this story is that my student gets to learn about computer round-off errors and how to debug code by sticking a print command in the code so that she can figure out these problems without too much intervention on my part.
Here's the debug line I ended up using, followed by the fixed if statement:
print t, t*100, (round(t*100)%10.0)
if (round(t*100)%10) == 0: #works now!!!
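If you want to see the round-off problem without any vpython involved, a few lines at the python prompt will do it (this just mimics the accumulating t from her program):

t = 0.0
dt = 0.01
for step in range(12):
    t = t + dt
    print t, t*100, (t*100) % 10

# After ten additions of 0.01, t*100 comes out as 9.999999999999998
# rather than 10.0, so (t*100) % 10 == 0 is False even when it
# "should" be True. round(t*100) snaps the value back to an exact
# integer, which is why the fixed if statement works.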
I'm looking forward to a fun semester of python projects!
The first time I ever taught an introductory physics course from top to bottom was as a last-minute summer replacement hire at a small liberal arts college. The schedule was intense: four hours a day every morning and two hours of lab 2-3 afternoons a week. I know I wasn't the best classroom instructor, but we had a pretty decent lab, and the students who took the class and worked hard did make it through; most importantly, they did learn some physics.

One issue that came up, though, about ⅔ of the way through the summer was that the students confessed that they hated the quizzes and exams I gave them, not because they were terribly hard, but because they felt like they could never guess what I (their instructor) was actually thinking when I wrote the questions. At first I felt like my worst fears had been realized: that I had written confusing and impossibly hard problems. But after talking with them, I came to realize that the level of the questions had been appropriate; it was just that they were trapped in a way of thinking which led them to believe that if they could figure out what I was thinking, they would be able to figure out the answers to the questions.

My response was that the only thing I was thinking was that if they applied the physics principles which we had discussed in class, no student would have any trouble answering the questions. Clearly, all of the students would breeze through the summer, and all of them would earn an A for the course. Of course I was wrong.

I spent much of the rest of the course trying to persuade the class that they did not need to be mind readers in order to do well. I'm not sure how many of them actually believed me, but the experience had a profound effect on my teaching. Ever since then, I've tried to do my best to make the physics concepts the central focus of all the classes I teach. It has been a hope of mine that no student would waste any precious study time trying to divine what is going on inside my head.

But, as I look back at that experience during the summer I first taught physics, I've been starting to wonder if maybe there was a lesson that I missed myself. What if the students weren't so much trying to read my mind, but instead were trying to think like me? Isn't that what I wanted? The difference may be subtle, but important, I think. When students are trying to read my mind, they are looking at a problem or question and trying to guess what the professor WANTS them to say. When students are looking at a physics situation and trying to think like their physics professor, they are trying to apply the thought processes and analysis skills of a physicist.
That is exactly what I want from my students. I want them to think like a physicist.
I have identified four specific things which I think represent the good parts of the KA.
Breadth of topics - With what is closing in on 3000 videos, there is no doubt that the breadth of topics the KA covers is incredibly wide. If you are a student in grades 4-12 or in college, chances are good that the KA has a video related to something being discussed in one of your classes. That alone doesn't make KA a good resource, but if a video can serve as a launching point for discussion in class, that would be a good thing. The more videos they have, the greater the chance that topics in more classes could have discussions related to something students watched on KA.
Resource for "flipping" - Much has been said about the potential for using KA to "flip" the classroom model. I don't want to make this discussion all about whether or not "flipping" a physics classroom is a good thing. I can see the value in the idea: doing something to encourage engagement with the course material is a good thing. Full disclosure: for two terms I made videos that students were encouraged to watch to guide their reading of the assigned material. I believe that critical reading is a skill we overlook in the college curriculum, and that I should be doing more to help my students be better readers. With respect to the KA, I think that if you have decided to use a "flipping" technique in your classroom then you owe it to yourself to at least look at KA and decide if it could be a resource for you.
Connection to Peer Instruction - One of the basic principles of Eric Mazur's "Peer Instruction" technique of teaching is that students learn more effectively by talking to each other than by hearing a lecture, because the students in class who just learned a concept can explain it in a way that makes sense to others in the class. That's the "peer" in Peer Instruction, right? Students learn from other students better since the professor has forgotten what it was like to not understand the concept and can't connect with a struggling student as well as another student can.
Virtual Tutor - I do believe that one of my jobs as a teacher is to have alternate ways of explaining a concept to students. Not everyone is going to understand every concept the first time we cover it, and there may be students who don't understand something the second, third or tenth time I explain it. If a KA video provides an alternate explanation for something that didn't click for a student in class, then I'm all for that. It is sort of like having a virtual tutor, except you can't really ask questions of the tutor.
So there you have it. Four things I think are good about the Khan Academy.
I want to say first that I really like what Vi Hart does with her videos.
Last week a few people in my twitter stream linked to a pair of videos that Vi did with Sal Khan of the Khan Academy. Here's one of them:
The point of this video (as I see it) is to discuss the difference between linear and logarithmic scales. It's a great concept that deserves discussion, and they make a decent effort to get their point across, I think.
It's just that the examples from the musical scales are filled with little inaccuracies that could easily have been corrected before they recorded the video.
Vi says at one point (somewhere around the 3:50 mark and after) "C is more like ...I don't know...let's say 300 all right so if this is 300 or 300x or just x...then this frequency would be 600..." Sal was trying to help her out since she didn't know the note frequencies that she was trying to use in her example, so he suggested just calling it "x". I think there can be more confusion introduced by trying to use the "x", but maybe that's just me. I can get past that, I guess.
There is a whole lot wrong with this small section; I barely know where to start. First, there are no units included at all. Actually, Sal tries to help Vi by suggesting that they use 440 as a note frequency, but he starts to say 440 kHz. The highest frequency the human ear can hear is around 20 kHz for healthy (and young) people, so hundreds of kHz are way out of the range of hearing. But Vi wants to use C and picks the number 300 to work with, no units included. This is something I don't let freshmen get away with in their work, and I work hard to not let myself forget units, either.
Okay, though, we'll assume the units they are using are Hz. Why pick 300 Hz for a C? (Clearly, she forgot to look up note frequencies. I get it.) Next time Sal and Vi talk about musical notes, I'm sure they will have the note frequencies handy. Why is this important? There is an international standard for note frequencies. Without making this all about tuning and temperament and the origin of harmony, let me say that the standard says that the pitch we call A above middle C is associated with 440 Hz. Based on that standard, if we are looking at an equal temperament scale (the most commonly used scale in music), the frequency of middle C is then 261.63 Hz. Vi even comments that they have chosen a really "weird" musical scale. Yeah, no kidding.
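For reference, the arithmetic behind that value: in equal temperament each semitone multiplies the frequency by \(2^{1/12}\), and middle C sits nine semitones below A440, so

$$f_C = 440\ \text{Hz} \times 2^{-9/12} \approx 261.63\ \text{Hz}$$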
I know that her point was that the difference in frequencies between notes separated by an octave increases as you go up the keyboard, but is it so hard to use the right frequencies? If the point of posting videos is to be educational, then why have wrong information in them? The details matter. It reminds me of the tuning fork sets which label C as having a frequency that is a multiple of 256 Hz. There have been proposed scales based around middle C having a frequency of 256 Hz, but no musician uses such a scale today. Yet we have countless physics and math teachers who believe that middle C on a piano has a fundamental frequency of 256 Hz. It's unfortunate, because those teachers and their students are missing an opportunity to learn more about music when they use this artificial scientific scale.
Here's a graph of the note frequencies for the equal tempered scale which I took from the page linked above. The horizontal axis is arbitrarily labeled "note number". The point of the graph is to show how the frequency difference between each successive note changes over a wide range of octaves. Each successive point on the graph represents a semitone higher in pitch than the previous note. Your ear perceives each tone as being the same "distance" in pitch as the previous tone, even though the change in frequency is not the same over the whole range.
Let's take the same graph, and make a semi-log plot. Here, I'm making the vertical axis (frequency) logarithmic.
Notice how now it is a straight line? This is what it means for something to scale logarithmically. The wikipedia page for logarithmic scale has some more examples.
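If you want to make plots like these yourself, here's a short numpy/matplotlib sketch that produces graphs like the two above (the note numbering assumes an 88-key piano with note 49 being A440):

import numpy as np
import matplotlib.pyplot as plt

# Equal-tempered note frequencies: note 49 on an 88-key piano is A440,
# and each semitone multiplies the frequency by 2**(1/12).
note_number = np.arange(1, 89)
frequency = 440.0 * 2.0 ** ((note_number - 49) / 12.0)

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
ax_lin.plot(note_number, frequency, ".")
ax_lin.set_xlabel("note number")
ax_lin.set_ylabel("frequency (Hz)")
ax_lin.set_title("linear scale")

ax_log.semilogy(note_number, frequency, ".")
ax_log.set_xlabel("note number")
ax_log.set_ylabel("frequency (Hz)")
ax_log.set_title("semi-log scale: a straight line")

plt.tight_layout()
plt.show()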
Also, it was too bad that they didn't have any of the actual tones in the examples they were using. Here's a piano keyboard you can use to hear for yourself how the intervals sound similar across octaves:
Toward the end of the video (around the 6:00 minute mark) Sal makes a comment that the logarithmic scale of frequencies is not the only logarithmic aspect to sound. He refers to the "magnitude of frequencies" and then quickly amends his statement to refer to the decibel scale. Vi chimes in with an example of talking loudly and softly, then starts to say something about the "distances between loudnesses" before Sal interrupts and the video wraps up. I have no idea what "magnitude of frequencies" and "distances between loudnesses" actually mean. From the context, it's clear that they are talking about the sound pressure, or sound intensity, or even sound power. All of those are ways to quantify the amplitude of a sound wave. They all mean different things, but the relevant point is that they all scale logarithmically, in a way similar to the pitch scale discussed above.
It's too bad they didn't take the time to go into the example of sound intensity and the decibel level. It's a great example of logarithmic scaling. Here's an example from an awesome website on musical acoustics at the University of New South Wales in Australia:
Credit for the sound files and flash animation: John Tann and George Hatsidimitris
The example has broadband noise decreasing such that the sound power is halved at each step. Sound power is proportional to the square of sound pressure, so if the sound power is halved, the sound pressure is reduced by a factor of the square root of 2 at each step. The animation above has the sound pressure envelope outlined in red on a linear scale. If I put the sound pressure envelope on a logarithmic scale, it looks like this:
I'm graphing sound pressure on the vertical axis using arbitrary units. In many cases where we are looking at measuring sound pressure we don't necessarily need the actual pressure measurement. Many times we only care how it compares to another sound pressure measurement. In this case, I know that the sound pressure level (which is different than the sound pressure!) is changing by -3 dB in each case. It doesn't matter what the original sound pressure is, since it is the relative change between the two that we are interested in.
As with the example of pitch, the sound pressure graph appears linear on a semi-log plot. This is what is meant by something scaling logarithmically. Notice that this example covers 4 orders of magnitude! That is a huge range, but it is only a part of the range of hearing for the human ear. A healthy ear is sensitive to roughly 6 orders of magnitude in sound pressure: from the threshold of audibility to the ear's threshold of pain.
Since it is often unwieldy to deal with values covering several orders of magnitude, even on a semi-log plot, we convert the sound pressure measurements to sound pressure level (SPL), sometimes referred to as sound level for short. A sound level measurement is always made with respect to a reference. Here's the formula for sound pressure level:
$$L_p = 20 \log_{10}\left(\frac{p}{p_0}\right)$$
where $L_p$ is the sound pressure level, $p$ is the sound pressure measurement and $p_0$ is the reference sound pressure. Note that the reference sound pressure does not have to be the threshold of audibility. We can use any sound pressure measurement as the reference sound pressure, then the sound pressure level is just a comparison between the two sound pressures. That's why I didn't need to know the units of sound pressure in the previous graph. I was only interested in the relative change in the sound pressure level between the sound samples. In this case, they decreased by 3 dB. I can use the sound pressure level equation to find the sound level and make a graph of that:
Note now that the graph uses a linear scale, and has the same shape as the semi-log plot of the sound pressure. The logarithmic nature of sound pressure scaling is accounted for by the definition of the sound pressure level.
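As a quick check of that -3 dB figure: halving the sound power means the pressure ratio in each step is \(1/\sqrt{2}\), so the change in level per step is

$$\Delta L_p = 20\log_{10}\left(\frac{1}{\sqrt{2}}\right) = -10\log_{10}2 \approx -3.0\ \text{dB}$$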
And just to be clear, I really do like Vi's videos.
August 11, 2011
After the Global Physics Department chat, I'm still very confused as to how the sig fig haters instruct their students to report numerical values.
I did a quick measurement tonight: I measured the length and width of a sheet of printer paper in centimeters. I came up with \( l = 27.95\space \text{cm} \) and \( w = 21.60\space \text{cm} \). I believed each measurement to be good to within \( \pm 0.05 \space \text{cm} \).
If I want to find the area, what value should I report? \( l \times w = 603.72 \space \text{cm}^2 \) without regard to the number of figures being reported.
Now, if I use the "crank three" method to get the range of values for the area, so that I can report the uncertainty in my area calculation, I would have:
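$$A_{\text{max}} = 28.00\ \text{cm} \times 21.65\ \text{cm} = 606.20\ \text{cm}^2$$

$$A_{\text{min}} = 27.90\ \text{cm} \times 21.55\ \text{cm} = 601.245\ \text{cm}^2$$

so the area lands somewhere in a band of about \( \pm 2.5\ \text{cm}^2 \) around my calculated 603.72 value.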
If I'm a really good student, I might remember that my instructor mentioned something about the uncertainty indicating how many digits should be reported in the answer. Maybe I even have in my notes a simple example from class. Hmmm, now I'm just confused. It seems like there should be a way to round my answer (both the value and the uncertainty) appropriately. But, how? (Remember, I'm still a beginning physics student.)
Here's my (the physics instructor, not the student, now) point: if you hate the rules or guidelines surrounding the traditional way of doing sig figs, that's fine with me. I'll even hop on that bus with you most of the way. But at some point, there has to be an actual discussion about the significance of the digits in the answers and the uncertainty. From there on out, we can choose whatever (appropriate) method we want for finding uncertainty, right?