By Matt Reese

After ongoing concerns about water quality, Grand Lake St. Marys was declared a distressed watershed in 2011. The lake's notorious water quality issues generated a mountain of bad press, and much of the blame was placed on agriculture in one of the most highly concentrated livestock watersheds in the country.

The "distressed" designation led to a ban on winter manure application and placed strong emphasis on other nutrient management practices for farms in the watershed. The changes were challenging, costly to implement in some cases, and required significantly higher management. But so far, it seems, they are working.

"We've seen some real changes in the watershed and the agricultural practices," said Bill Knapke with Cooper Farms, who farms in the watershed. "Fertilizer sales have really changed dramatically since all of the livestock producers have developed nutrient management plans and are doing all of their soil testing and testing their manure. They are probably doing a better job of managing the nutrients they produce on their farms and have come to the conclusion that they didn't need to be buying as much commercial fertilizer. We have seen a reduction in the amount of nutrients being applied in the watershed in both commercial fertilizer and manure, but we haven't seen a decline in yields. Farmers are still producing excellent yields year in and year out and it has probably improved their bottom line."

Knapke said the nutrient management plans being implemented on farms in the watershed are both helping reduce nutrient loss and improving overall efficiency.

"Nutrient management plans look at storing nutrients on the farm and when you apply them. Part of the rules was to not have manure application between Dec. 15 and the spring.
Guys are doing a better job of land applying the nutrients and they are also using cover crops that can take up those nutrients and store them for next year's crop," he said. "When we look at sustainability in grain production and our overall carbon footprint, the better we can produce grain, the better we are at producing protein through the poultry and pork. We are looking at a lot of different things that help farmers better use nutrients."

The on-farm efforts, in addition to various measures taken by the non-ag sector in the watershed, are showing up as significantly reduced nutrient levels entering the lake, according to new research from Stephen Jacquemin at Wright State University's Lake Campus.

"The Grand Lake St. Marys watershed has drawn attention over the past decade as water quality issues resulting from nutrient loading have come to the forefront of public opinion, political concern, and scientific study. Grand Lake St. Marys is 250 square kilometers, which makes it a smaller watershed," Jacquemin said. "When you are trying to effect positive change and bring down nutrient loads to improve water quality, it certainly becomes more manageable when you can see the entire problem at once."

The research analyzes water quality before and after the 2011 "distressed" designation. The objective of Jacquemin's study was to assess long-term changes in nutrient and sediment concentrations before and after the winter manure ban and other rules and best management practices, including cover crops, manure storage or transfers, and buffers, were implemented.

The research looked at variation in total suspended solids, particulate phosphorus, soluble reactive phosphorus, nitrate, and total nitrogen concentrations in daily Chickasaw Creek water samples spanning 2008 to 2016. Chickasaw Creek drains around 25% of the watershed area. The research results are very encouraging.

"We saw reductions in sediment and nutrient loading.
These changes varied based on the time of year and the water flow, but we saw reductions from 20% to 50% and sometimes even 60% in sediment and nutrient loading. These are extremely substantial nutrient decreases," Jacquemin said. "It is difficult in this case to attribute this to any one practice because there were a number of good practices implemented at that same time. But we can say that since that time we have seen great reductions. The runoff is coming from fields and it was good to see everyone tackle that. The ag producers did a number of outstanding things. We certainly have a ways to go but we are heading in the right direction."

In addition, Jacquemin is looking at additional benefits from the installation of wetlands in the watershed to capture nutrients from the water.

"We have been monitoring the wetlands for a year or so now. The preliminary results are extremely promising. We are seeing 50% reductions in nitrates during summer loading periods and close to 80% reductions in phosphorus during loading periods. Most importantly, we are seeing that these wetlands are able to process around 20% to 30% of the actual stream that they are filtering," he said. "These wetlands are located on Prairie Creek and Coldwater Creek. There are still a number of tributaries that could benefit from wetlands. The two wetlands we have are doing outstanding."

Wetland installation is costly, but effective.

"The cost to put in a wetland depends on the design. There is no recipe. There are no set guidelines, but any good wetland will be a series of increasingly shallow ponds that are occupied by vegetation that can filter out the nutrients as they go through," Jacquemin said. "From a cost perspective, the wetlands of Grand Lake have cost millions of dollars, but they are effective. They work.
The watershed has ultimately benefitted from their inclusion."

There is additional work being done to create wetlands out into the lake to filter more nutrients and a larger percentage of the water entering the lake.

"There is a way in a large shallow lake that you can expand wetlands out into the lake. We call these our littoral wetlands. The idea is to build vegetation embankments right around the mouths of the tributaries, which slow down the water and filter water so it drops out sediment before it has the chance to get out into the rest of the lake," he said. "Retaining walls are being created in the lake to build new wetlands. We are looking at that more this summer. We have made incredible progress and we are on the right path, but it is not done. There is always something more that can be done. This is not a situation that happened overnight and we are not going to get out of it overnight."

Jacquemin's research on water quality monitoring was published in the Jan.-Feb. 2018 issue of the Journal of Environmental Quality.
The horse race between the app stores has become a tedious exercise. Apple says it has 800,000 apps in the App Store. Google Play is at about 800,000 and is likely to hit the million-app benchmark before iOS. But, as our readers so dutifully informed us, they do not really care. App store volume has become a non-story.

Quantifying the quality of apps between iOS and Android is a different matter altogether. Quality, by its definition, is a subjective thing, especially when it comes to mobile apps. People's opinions are shaded by their affinity for one smartphone or another, their choice of mobile operating system and varying brand loyalty. When it comes to Apple and Android, fans of each will scream at each other that their apps are better, more numerous and generally awesome. Who is right?

The general perception is that iOS apps in Apple's App Store are of better quality than their Android counterparts in Google Play. There has been really no way to quantify that through the history of the mobile app ecosystem. Until today. We can finally say, through quantifiable data, that iOS apps on aggregate are of better quality than Android apps.

The Data Doesn't Lie

What is this, you say? You cannot quantify quality? Well, that is true, to a certain extent. Perception of quality is clouded by an individual's subjectivity. That did not stop application testing company uTest from setting out to answer the question. Today it released Applause, a service that uses an algorithm to crawl all live apps in the App Store and Google Play and aggregate every app's ranking and user reviews to determine the quality of an individual app.

It is some powerful data and the results are fascinating. Applause ranks every app in 10 categories and gives it an Applause Score of 1 to 100 in each. uTest can then look at average scores for app categories (such as games or media) and, yes, entire operating systems. By uTest's metrics, iOS apps have a mean Applause Score of 68.53.
Android apps' average Applause Score is 63.34. The margin of difference between the two is roughly 8%. That does not necessarily mean that any individual iOS app is going to be better than its equivalent or similar Android app. Each platform offers unique characteristics that can make the experience better or worse.

Upon ReadWrite's request, the team at uTest took a broad-level look at some popular app categories and compared them between the Apple App Store and Google Play. As you can see from the chart below, Apple generally comes out ahead in most major categories.

Note: Apple and Google do not use a common taxonomy for how they categorize apps. uTest had to map equivalent app categories to each other to come up with comparable rankings. Apps can be listed in two separate categories. Amazon Appstore for Android rankings are not included. Exclusive to ReadWrite.

As you can see, iOS ranks higher in nine of 11 top app categories (eight if you count weather as a virtual push between the two). Android comes out ahead in productivity and medical apps. While these are not straight one-to-one comparisons, the data is deep enough at a categorical level to give us a good understanding that iOS users are ranking app quality higher than their Android counterparts.

When developing the algorithm for Applause, uTest was looking for two properties.

"We look for two things. One, did it have a statistically different bearing on the perceived app quality, the level of user satisfaction. Second, did the keywords or key phrases that we are crawling intuitively fit into this bucket. So, for performance, for example, there are really clean words like crash or freeze or hang," uTest's Matt Johnston said.

Case Studies

For a more direct one-to-one comparison, we asked uTest for a few case studies to highlight the difference in rankings between iOS and Android for the same app.
We asked specifically that uTest compare social tablet magazine reader Zite because of its popularity and the significant difference between its iOS and Android versions. If you are a Zite user on iOS and Android, you know that the two are distinctly different experiences. Zite first came to the iPad before spreading to the iPhone and Android smartphones. The app may serve you the same content across operating systems, but it is by no means the same experience.

Zite's iOS ranking was 66. For Android it is 62. For iOS, Zite ranks at or above the mean Applause Score in nearly every category. It ranks high in content (as it should) and well above the average in privacy. Zite's iOS Applause Score is not surprising given that it is a well-liked app used by millions who are likely to review it kindly.

Zite Applause Score for iOS

Zite Applause Score for Android

The Android app is a different story. It ranks below the mean Android Applause Score in six of 10 categories, besting the average in only content, privacy and security.

An app that performs better on Android would be CBS Sports Fantasy Baseball. This is an app I use with regularity and, I have to admit, it is not terrific on either platform. Its Android score is a 12 while its iOS score is a six. Neither version hits the mean in any single category, but the Android version does perform better in staple metrics such as usability and performance.

CBS Fantasy Baseball Applause Score for Android

Trusting The Algorithm?

The bottom line is that we have to step back and assess whether we trust uTest's Applause algorithm to determine quality on both the broad and granular levels. Essentially, you are putting your trust in two things: the wisdom of the crowds (the reviewers on Google Play and the Apple App Store) and Applause's ability to measure the subjective nature of such reviews. On its surface, the Applause algorithm is a fairly simple concept.
It crawls and looks for keywords that are relevant to certain categories (like "crash" for performance) and has the ability to exclude certain comments in cases of astroturfing or black-hat review tactics.

Dan Rowinski
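To make the concept concrete, here is a minimal sketch of that kind of keyword-bucket review scoring. This is not uTest's actual Applause algorithm; the category names, keyword lists, and the star-rating-to-100 scoring formula are all assumptions for illustration, borrowing only the "crash/freeze/hang signals a performance problem" idea quoted above.

```python
# Hypothetical keyword-bucket review scorer. Buckets and formula are
# illustrative assumptions, not uTest's real Applause algorithm.
CATEGORY_KEYWORDS = {
    "performance": {"crash", "freeze", "hang", "slow"},
    "usability": {"confusing", "clunky", "intuitive"},
    "privacy": {"tracking", "permissions", "data"},
}

def score_app(reviews):
    """Return a 1-100 score per category from (star_rating, text) pairs.

    Reviews containing a category's keywords set that category's score
    from their average star rating; categories with no matching reviews
    fall back to the overall average rating.
    """
    overall = sum(rating for rating, _ in reviews) / len(reviews)
    scores = {}
    for category, keywords in CATEGORY_KEYWORDS.items():
        matched = [rating for rating, text in reviews
                   if keywords & set(text.lower().split())]
        avg = sum(matched) / len(matched) if matched else overall
        scores[category] = round(avg / 5 * 100)  # map 1-5 stars onto 1-100
    return scores

reviews = [
    (1, "App will crash or freeze on launch"),
    (5, "Love it, very intuitive layout"),
    (4, "Great content"),
]
print(score_app(reviews))  # low performance score, high usability score
```

A production system would layer on the pieces the article mentions: per-app aggregation across thousands of reviews, platform-level means, and filtering out astroturfed or black-hat reviews before scoring.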