
Week 6: Beaver Army breakthrough? & Bonneville Dam adventures

Monday: It was a stressful start to the week: I finally finished the plots for each of the sensors, but ended up with some mistakes. The Jetty A salinity model-observation comparison plot was obviously the product of an error, but I couldn’t figure it out until I double-checked my script. I had entered the incorrect variable for the observation data (the dependent variable) in the plot script; luckily I spotted it, and after the fix the model data aligned very well with the observational data for both physical parameters. However, I couldn’t find any script errors on my part in the SATURN-07 salinity and temperature model-data comparisons. It was odd that both graphs had zero values in the model data; they looked like outliers at first, but there were far too many of them to be insignificant.
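Since the plotting bug came down to confirming which series was which, a quick numeric sanity check is useful alongside the plot itself. Here is a minimal sketch (not my actual plotting script; the salinity values are made up for illustration) of quantifying model-observation agreement with bias and RMSE:

```python
import numpy as np

def compare_model_obs(model, obs):
    """Return (bias, rmse) for paired model/observation samples."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    diff = model - obs
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Hypothetical paired surface salinity samples (psu) at one sensor.
model_sal = [20.0, 22.5, 25.0, 23.5]
obs_sal = [19.0, 22.0, 26.0, 23.0]
bias, rmse = compare_model_obs(model_sal, obs_sal)
print(round(bias, 3), round(rmse, 3))  # prints: 0.25 0.791
```

A near-zero bias with low RMSE is what a well-aligned comparison like the fixed Jetty A plot should produce; swapping in the wrong variable usually shows up immediately as a huge bias.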
 
Tuesday: A much better day in terms of making corrections and moving forward. I checked the model data values for SATURN-07 and confirmed that there were zero values in the table after extracting it with the new method. I wasn’t sure what the explanation was, but I was advised to revert to the M-elio model data extraction method; since SATURN-07 is a surface sensor, that method is appropriate here. I did this for the timeseries covered by observational data (~May 9th - 18th, 2012), and the plot came out much closer to expectations. I also found that the best node layer number to use was 54 (the surface layer); any other layer number produced model curves further from the observational data. There is still an inconsistency from about May 10th - 14th, where the model salinity is too high and the temperature too low compared to the observational data. I have looked at the other sensors, SATURN-02 and Jetty A, and determined that the model does a fair job of representing conditions there, so it is less likely that those areas are contributing to the error. After speaking with Grant, one hypothesis is that the error could be attributed to the model not accounting for the Chinook River, a freshwater river that empties into the northern part of Baker Bay, as mentioned in my previous blog. Another hypothesis is that the model has trouble representing this location at the beginning of neap tide cycles: during neap tides the range between tidal fluctuations is smaller, so there is less mixing and more stratification, and salinity intrudes further upstream because mixing doesn’t disrupt it. The model could therefore be exaggerating the stratification, letting the ocean’s input have more of an effect than it should over the tidal cycle.
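The layer fix boils down to indexing the right vertical level out of the extracted model output. A minimal sketch, assuming a (time, layer) array with 1-based layer numbering where 54 is the surface; the function name and the toy salinity array are illustrations, not the actual extraction scripts:

```python
import numpy as np

SURFACE_LAYER = 54  # assumed surface layer number, matching what worked above

def extract_layer(var2d, layer):
    """Pull one vertical layer (1-based numbering) out of a (time, layer) array."""
    return var2d[:, layer - 1]

# Hypothetical salinity output at one node: 5 timesteps x 54 vertical layers.
salinity = np.arange(5 * 54, dtype=float).reshape(5, 54)
surface = extract_layer(salinity, SURFACE_LAYER)
print(surface.tolist())  # prints: [53.0, 107.0, 161.0, 215.0, 269.0]
```

Picking any layer other than 54 here would slice a subsurface level, which is consistent with the other layer numbers producing curves further from the surface-sensor observations.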
To observe the tidal cycles, I am going to extend my timeseries into June and July (model data only) to see whether there is a jump at the beginning of each neap cycle. An additional sensor we could look at is the Beaver Army Terminal, because it is a freshwater-input boundary for database 22. A useful check would be to compare model and observational data at this location to make sure we are getting accurate river input, since that could affect salinity and temperature.
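One simple way to flag where each neap cycle begins in the extended timeseries is to track the daily tidal range: small ranges mark neaps, large ranges springs. A sketch with a synthetic two-constituent tide; the amplitudes and the half-hour sampling are illustrative, not taken from the model:

```python
import numpy as np

# Synthetic tide: M2 (12.42 h) + S2 (12.00 h) constituents beat over
# a ~14.8-day spring-neap cycle. Amplitudes are made up for illustration.
t = np.arange(0, 30 * 24, 0.5)  # 30 days of hours at half-hour steps
tide = 1.0 * np.cos(2 * np.pi * t / 12.42) + 0.4 * np.cos(2 * np.pi * t / 12.00)

# Daily tidal range: reshape into (days, samples-per-day) and take max - min.
per_day = tide.reshape(30, 48)
daily_range = per_day.max(axis=1) - per_day.min(axis=1)
neap_day = int(daily_range.argmin())
spring_day = int(daily_range.argmax())
print(neap_day, spring_day)
```

With real model output, the dates where `daily_range` bottoms out would be the neap onsets to inspect for the salinity jump.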
 
Wednesday and Thursday: After I compiled observational data for Beaver Army Terminal from the CMOP website for May - July 2012, Grant compiled flux.th files from the model’s boundary condition input for Forecast 22. From my understanding, these two datasets should be very similar, since the observational data is essentially what should be fed to the model as the freshwater river boundary condition for the database 22 forecast I’m looking at. Instead, I found that the boundary condition input matched the observations well in phase and in the minimum value of each fluctuation, but its maximum values were not as high. The lower maxima may mean the model is not accurately representing the river input, and the heightened salinity and colder temperatures in Baker Bay could stem from that shortfall. To confirm whether the model simply takes these observational values as the boundary condition, I emailed Paul. He responded that there is actually a 2-dimensional circulation model that uses river discharge from Bonneville Dam and the Willamette River as its boundary conditions, computes the discharge at Beaver Army, and relays that to the model. I will be looking into how this affects my findings and whether my hypothesis about the Beaver Army data comparison still holds.
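The max-value comparison itself is simple once both series are in hand. Below is a sketch that assumes flux.th is a plain two-column text series (time in seconds, discharge, with inflows negative); that column layout and all of the numbers are assumptions for illustration, not the actual file contents:

```python
import io
import numpy as np

# Hypothetical flux.th-style text: time (s), discharge (negative = inflow).
flux_th = io.StringIO("""\
0      -4000.0
3600   -5200.0
7200   -6100.0
10800  -5000.0
""")
model = np.loadtxt(flux_th)             # columns: time, discharge
model_peak = np.abs(model[:, 1]).max()  # peak boundary inflow magnitude

obs_peak = 7300.0                       # hypothetical observed peak discharge
shortfall = obs_peak - model_peak       # positive: model maxima fall short
print(model_peak, shortfall)  # prints: 6100.0 1200.0
```

A consistently positive `shortfall` across fluctuations is the pattern I saw: phase and minima match, but the boundary condition never reaches the observed peaks.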
 
Friday: The field trip to Bonneville Dam and Multnomah Falls was amazing! Getting to walk around the hatchery was fun: it felt great to finally see the fish we’re actually trying to research and delineate physical habitat opportunity for. We learned more about the history and workings of Bonneville Dam, viewed the fish ladders (no salmon, unfortunately), and looked through the viewing window to see lamprey! So awesome! I definitely enjoy trips like these, where I get to learn tidbits about the locations I’m working with and researching, and experience the atmosphere of the area. Afterward, we headed up to Multnomah Falls, and I climbed (I wouldn’t even say hiked, climbed) to the top. It was totally worth it; the view was spectacular. Especially for a New Mexican girl who is used to dry, flat desert most of the time...