Chapter 17 New Trends
Applications of artificial intelligence (AI) and machine learning in the for-profit world signal what’s about to happen in the non-profit world. We’ve already seen many examples of AI and machine learning techniques. The most exciting one is natural language generation, which is exactly what it sounds like: using text mining and natural language processing, a computer can create language, text, and narratives that read as if a human wrote them. An organization that can combine all these applications into small chunks of tasks, distributed through a single, easy-to-consume platform, will differentiate itself from all the other non-profits.
Let’s see some of these new ideas in detail.
17.1 USC’s Action Center
Using the idea of one action at a time, we built an app at USC within our Salesforce mobile instance called Action Center, as seen in Figure 17.1. A fundraiser with active assignments sees action items like these on his or her Salesforce mobile app:
- Donor News
- Prospect Recommendations
- Gift Alerts
- Proposal and Portfolio Cleanup
Donor News
We have built a crawling engine that looks for assigned prospects who may be mentioned in the news. We then perform some entity matching to ensure that the entity mentioned is the one we were looking for. Finally, from all of the collected news items, we select and display the stories about the most important and relevant entities.
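As a rough sketch of the entity-matching step, assuming we already have a table of assigned prospects and a table of names extracted from crawled news items (both hypothetical), the stringdist package’s Jaro-Winkler distance can flag likely matches for review:

```r
# Hypothetical inputs: assigned prospects and names pulled from crawled news
library(stringdist)

prospects <- data.frame(id = 1:3,
                        name = c("Maria Gonzalez", "Robert T. Chen", "Dana Whitfield"),
                        stringsAsFactors = FALSE)
news_mentions <- data.frame(article_id = c(101, 102, 103),
                            name = c("Maria Gonzales", "Rob Chen", "Daniel Whitman"),
                            stringsAsFactors = FALSE)

# Jaro-Winkler distance between every prospect and every mentioned name
d <- stringdistmatrix(tolower(prospects$name), tolower(news_mentions$name),
                      method = "jw", p = 0.1)

# Keep pairs below a (tunable) distance threshold as candidate matches
candidates <- which(d < 0.15, arr.ind = TRUE)
data.frame(prospect = prospects$name[candidates[, "row"]],
           mention  = news_mentions$name[candidates[, "col"]],
           distance = round(d[candidates], 2))
```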
Prospect Recommendations
If a fundraiser’s portfolio is active with constant visit and qualification activity, we show them an unassigned prospect whom they might be interested in qualifying. These recommendations are based on the characteristics of the fundraiser’s existing assigned prospects. We try to recommend the prospect that is most similar to the majority of the existing prospects. We measure similarity using the prospects’ addresses, degree departments, giving likelihood, and other factors. With the simple click of a button, a fundraiser can request an assignment or see more details on the prospect without leaving the interface.
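A minimal sketch of one way to compute this similarity, assuming a small table of prospect features (the field names are placeholders): Gower distance handles the mix of categorical and numeric fields, and we recommend the unassigned prospect that is closest, on average, to the assigned ones.

```r
# A rough sketch: recommend the unassigned prospect most similar, on average,
# to a fundraiser's assigned prospects. Feature names are hypothetical.
library(cluster)

prospects <- data.frame(
  assigned          = c(TRUE, TRUE, TRUE, FALSE, FALSE),
  state             = factor(c("CA", "CA", "WA", "CA", "NY")),
  degree_department = factor(c("Engineering", "Business", "Engineering",
                               "Engineering", "Law")),
  giving_likelihood = c(0.8, 0.7, 0.9, 0.75, 0.4)
)

# Gower distance handles the mix of categorical and numeric fields
d <- as.matrix(daisy(prospects[, -1], metric = "gower"))

assigned   <- which(prospects$assigned)
unassigned <- which(!prospects$assigned)

# Average distance from each unassigned prospect to the assigned portfolio
avg_dist <- rowMeans(d[unassigned, assigned, drop = FALSE])
recommended <- unassigned[which.min(avg_dist)]
prospects[recommended, ]
```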
Gift Alerts
When a donor in a fundraiser’s portfolio makes a new gift, we show it to the fundraiser for an easy touch point with the click of a button. Often, if a donor has multiple giving areas, the fundraiser managing the relationship may be unaware of the other gifts.
Proposal and Portfolio Cleanup
If a prospect has not been contacted over a certain period, we suggest that the prospect be removed. Similarly, if a proposal has been open over a year, we suggest that the fundraiser update the proposal. Again, with one click a fundraiser can complete both tasks.
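Behind the scenes, these suggestions can be as simple as two filters. Here is a minimal dplyr sketch; the field names and thresholds are assumptions.

```r
# A minimal dplyr sketch; field names and thresholds are assumptions
library(dplyr)

assignments <- data.frame(
  prospect_id  = 1:4,
  last_contact = as.Date(c("2017-01-15", "2016-03-02", "2017-06-20", "2015-11-30"))
)
proposals <- data.frame(
  proposal_id = 1:3,
  opened_on   = as.Date(c("2016-02-01", "2017-04-10", "2015-09-05")),
  status      = c("Open", "Open", "Open")
)
today <- as.Date("2017-07-01")

# Suggest removal if no contact in the last 12 months
stale_prospects <- assignments %>%
  filter(last_contact < today - 365)

# Suggest an update if a proposal has been open for more than a year
stale_proposals <- proposals %>%
  filter(status == "Open", opened_on < today - 365)
```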
17.2 Opportunity or Proposal Generation
Portfolios are obsolete.
Efforts to “optimize” a portfolio are useless.
For many years we have seen stats showing that fundraisers are able to contact only 50 to 60 percent of their portfolios within a year. Not only does this leave many prospects untouched, but the portfolio model also doesn’t create a sense of urgency. It hides the urgency.
If we were to truly optimize a fundraiser’s portfolio, like an investment portfolio, we would fill the portfolio with the prospects with the highest returns in the shortest duration possible. This can’t happen because we still need to keep qualifying new leads. There goes the optimization.
Simple math will tell us that there’s no optimization. Some sort of balance, maybe. Even the balance requires constant information searching and system updating. This is not scalable or doable with a large number of gift officers.
We can say that our return is directly proportional to the time invested in each activity. Let’s define the time spent in a stage as $T_{stage}$, a percentage of the total available time. To calculate the total return, we simply multiply the number of prospects $P_{stage}$ in each stage (qualification, cultivation, solicitation, and stewardship) by the time spent in that stage:

$$\text{Total Return} = P_Q \times T_Q + P_C \times T_C + P_S \times T_S + P_{ST} \times T_{ST}$$
To optimize or maximize the total return, we need to change either the number of prospects or the time spent in each stage. But because the expression is linear in the time shares, the maximum return, mathematically, comes only from spending 100% of the time with prospects in the solicitation stage, where gifts are actually closed (that is, ask for gifts all the time).
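To make the arithmetic concrete, here is a quick numeric illustration of the formula in R; the prospect counts and time shares are made up.

```r
# Made-up prospect counts per stage and shares of available time (sum to 1)
prospects  <- c(Q = 40, C = 25, S = 10, ST = 60)          # P_stage
time_share <- c(Q = 0.30, C = 0.40, S = 0.20, ST = 0.10)  # T_stage

# Total Return = P_Q*T_Q + P_C*T_C + P_S*T_S + P_ST*T_ST
total_return <- sum(prospects * time_share)
total_return
#> [1] 30
```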
The optimization concept is appealing, but it increases the time burden on the relationship management staff as well as on the gift officers. And the tasks with a higher burden are likely to remain incomplete. Busy people, like gift officers, appreciate tasks that help them get closer to their goals, not chores like entering and updating records in a system.
Here’s how AI can help us. Rather than assigning an abstract number of prospects, the combination of AI and human intelligence will create very targeted opportunities that the fundraisers can act on. These opportunities will come populated with the tentative ask date and ask amount, the likelihood that this ask will succeed, the potential areas that the prospect may be interested in, and the institution’s resources that match up with the prospect’s interest, as seen in Figure 17.2.
The fundraiser can then add the appropriate next steps to the opportunity. If the prospect isn’t responsive, the fundraiser can close the opportunity and move to the next one. If the prospect is responsive, the fundraiser can adjust the steps to move the prospect to an ask. This process will create a sense of urgency because this would be the only source of new leads. It will also allow better measurement of the activities that matter and better projection of future revenue, rather than the obscurity hidden behind portfolios.
Portfolios are passive. Opportunities are active.
Of course, a fundraiser can add an active opportunity by himself or herself, but unlike an assignment request, an opportunity requires thinking proportionate to its seriousness.
In any case, tiny tasks with a clear outcome and action item will result in better use of one’s time—optimization in its truest sense.
If we get rid of passive portfolios, we need to find ways to feed new leads. Rather than just pushing new names, we should push opportunities complete with details such as ask amount, ask date, and a brief strategy.
How do we do so? Let’s see.
Ask amount: This is the easiest to predict with high confidence. Why? We can simply look at previous giving and predict the next gift size as we saw in the Predicting Gift Size chapter. We can also use wealth capacity to make better estimates.
Ask date: This isn’t too difficult to predict, but the confidence intervals on this prediction would be wide. We can use a combination of time series forecasting and RFM to predict a time frame in which the donor is likely to make a gift; a rough sketch covering both the ask amount and the ask date follows this list.
Brief strategy: This is the hardest of all. We will need to know many of the prospective donor’s relationships and interests. We will also need to know our organization’s offerings and key people.
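Here is a rough sketch of the first two pieces, assuming a simple gift-history table: the ask amount comes from an uplift on the largest previous gift (a stand-in for the fuller models in the Predicting Gift Size chapter), and the ask date from the typical gap between gifts.

```r
library(dplyr)

# Hypothetical gift history for one prospect
gifts <- data.frame(
  gift_date   = as.Date(c("2014-05-10", "2015-06-02", "2016-05-28", "2017-06-15")),
  gift_amount = c(500, 750, 1000, 1200)
)

# Ask amount: a simple uplift on the largest previous gift (a placeholder rule)
ask_amount <- 1.25 * max(gifts$gift_amount)

# Ask date: last gift date plus the median gap between gifts (an RFM-flavored rule)
gaps     <- as.numeric(diff(gifts$gift_date))
ask_date <- max(gifts$gift_date) + median(gaps)

list(ask_amount = ask_amount, ask_date = ask_date)
```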
17.3 Bio Generation
We spend many hours creating a biographical profile of a prospect or donor. This profile includes addresses, relationships, wealth, giving to our organization and others, board memberships, and a summary of all these facts.
Many organizations can create such a profile with a single click after entering all the needed facts into their customer relationship management (CRM) system.
I have two thoughts on this process.
- Rather than us finding all these facts, can a platform be created to get this information directly from the prospect?
- If we’re collecting these facts from various sources, can we create a tool that collects, cleans, and presents the information in plain English?
You may ask, “Why would a prospect give all this information to us?” A prospect would give this information if she gets something of value in return. Think Facebook. What if we can offer a glimpse of our most important work, an insider’s look, based on the interest match? This customized look can include videos of the researchers showing their work, case studies, impact of this work, direct messaging, and so on. If the perceived value of the offering is high, we should see adoption. Scaling this should not be a problem once we have all of our assets created. This whole thing can run on automation, except where a personal response is necessary.
You may also say, “Auto-curated information can never match thoughtful human synthesis.” And you’re right. At least for the near future.
We’ve already seen investment in this area of natural language generation. Companies like Google, the AP, and even the LA Times have found ways to create narratives using facts and natural language generation. Do these narratives shake your brain with joy or surprise? No. That’s not the goal. Yet. But this approach helps us finish template-based tasks faster.
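As a taste of how far templates alone can go, here is a minimal sketch using the glue package. The donor fields are assumptions about what a CRM extract might contain, and the output is a first draft for a human to polish, not a finished bio.

```r
# Template-filling with glue: the simplest form of "natural language generation."
# Field names are assumptions about what a CRM extract might contain.
library(glue)

donor <- list(
  name        = "Jane Smith",
  city        = "Pasadena",
  degree_year = 1992,
  school      = "School of Engineering",
  lifetime    = 250000,
  last_gift   = 25000,
  board       = "Engineering Board of Councilors"
)

glue_data(donor,
  "{name} of {city} earned a degree from the {school} in {degree_year}. ",
  "Lifetime giving stands at ${format(lifetime, big.mark = ',')}, with a most ",
  "recent gift of ${format(last_gift, big.mark = ',')}. {name} serves on the {board}.")
```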
17.4 Web Giving
I’m sure you have visited sites that show you instant chat pop-ups. These pop-ups are connected to customer service folks on their mobile phones. No matter where they are or what time it is, a representative can type on her phone and answer your questions. It is simple to apply this technique to giving pages as well. A prospective donor lands on the page. We show a pop-up. The prospective donor asks a few questions. We direct them to the proper place or giving options and, of course, we capture their information and comments in the process.
17.5 Trackers + Ads
Did you search for tickets to your trip to Hawaii and now you see advertisements for flights and hotels everywhere? How does that happen? There are multiple ways in which this happens, but all involve some sort of Internet cookie that captures your search and/or browsing history. This information is then traded on ad networks or given directly to the advertisers. Facebook makes it easy for advertisers. Any website can track your visit history using a Facebook pixel. Then the website owner can pay Facebook to show ads to the Facebook users that pixel has tracked. This is called retargeting.
Can’t we track and advertise similarly? Of course, we can, and we should. At least for testing. This is a low-cost option, compared to a telephone program, to see whether we can acquire donors.
Another similar and simple approach is to show Google ads to people who search for “charitable contribution,” but limit the ad display to searchers in your geographic region. Finding the right keywords is a real challenge: when I researched, I couldn’t find many searches for “charitable contribution.” We need to get inside the heads of our potential donors. What are they thinking when they are ready to file taxes? We also need to pay attention to the timing of the ads. You’d be surprised when people search for donations. It’s around April, when taxes are due. An ad like this would work well, don’t you think?
17.6 Event Suggestor
Let’s say that we developed a mechanism to collect the interests of our donors and prospects. These interests could be broad, like science, or narrow, like CAR-T cell therapy, but we do our best to capture specific interests. We also capture facts on our assets such as researchers, facilities, research products, and others. For higher-education institutions, this database will be rich.
Now the easy part. By creating a mashup of our constituents’ interests as well as geographic location and our assets, we can suggest highly tailored events. An event suggestion could look like Figure 17.3.
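A rough sketch of the mashup, with hypothetical tables of constituent interests and institutional assets; in practice the matching would be fuzzier than an exact join on an interest keyword.

```r
library(dplyr)

# Hypothetical tables: constituent interests and institutional assets
constituents <- data.frame(
  constituent_id = 1:4,
  metro          = c("Los Angeles", "Los Angeles", "San Francisco", "Los Angeles"),
  interest       = c("cancer research", "film", "cancer research", "cancer research"),
  stringsAsFactors = FALSE
)
assets <- data.frame(
  asset    = c("Cancer center lab tour", "Film school screening"),
  interest = c("cancer research", "film"),
  stringsAsFactors = FALSE
)

# Join on interest, then count interested constituents per metro and asset
event_ideas <- constituents %>%
  inner_join(assets, by = "interest") %>%
  count(metro, asset, sort = TRUE)

event_ideas   # the top rows suggest an event theme and a city to host it in
```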
Once we have a list of confirmed attendees, using all the data (relationships, location, interests, giving capacity, and board memberships), we can come up with various seating combinations that we think could spur engaging conversations and build future relationships, as seen in Figure 17.4.
17.7 Donor Platforms
Websites like DonorsChoose and Kiva are popular with donors because they can see the impact of their giving immediately. Charity Navigator is similarly popular because prospective donors can search and find the “best” charities to support. But what if we built a platform that, using a prospective donor’s social data, could recommend non-profits for making gifts: an AI-driven mashup of DonorsChoose and Charity Navigator? This approach could yield a list of non-profits that are highly relevant to the donor.
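A minimal content-based sketch of the idea: score each non-profit by the overlap between its cause keywords and a donor’s stated or inferred interests. All of the data here is made up.

```r
# Minimal content-based matching: score charities by keyword overlap with a
# donor's interests. All data here is made up.
donor_interests <- c("education", "clean water", "animals")

charities <- list(
  "Charity A" = c("education", "literacy", "children"),
  "Charity B" = c("clean water", "sanitation", "health"),
  "Charity C" = c("animals", "wildlife", "clean water")
)

# Jaccard-style overlap between the donor's interests and each charity's causes
overlap <- sapply(charities, function(causes)
  length(intersect(donor_interests, causes)) / length(union(donor_interests, causes)))

sort(overlap, decreasing = TRUE)   # recommend the highest-scoring charities
```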
17.8 Crawling + MTurks
In the Text Mining chapter, we saw a few examples of scraping data off the web. Similarly, we can set up web crawlers to store information from websites of interest. We can process this raw data using natural language processing into something useful. We can then use services like Amazon’s Mechanical Turk (AMT) to create small tasks that humans (called Turks or MTurks) can fulfill rapidly and cheaply.
Companies and people have used AMT to categorize paper receipts,[12] transcribe audio, and even write poems.[13] A good use case for non-profits is the styling of postal addresses. For example, if “ST” needs to be spelled out as “Street” and “AVE” as “Avenue”, we can provide such instructions to the Turks, and within a few days we can get back thousands of stylized addresses. An example of combining crawling with MTurks would be setting up a crawler to list obituaries, using NLP to find potential matches in our database, and then using the MTurks to validate the matches.
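For the address-styling example, a rule-based first pass in R can handle the common abbreviations, leaving only the messy leftovers for the Turks; the abbreviation list here is intentionally tiny.

```r
# Rule-based first pass at address styling; rows the rules can't handle cleanly
# become the small tasks handed to the Turks
addresses <- c("123 MAIN ST", "45 OCEAN AVE", "7 ELM BLVD")

styled <- gsub("\\bST\\b",   "Street",    addresses)
styled <- gsub("\\bAVE\\b",  "Avenue",    styled)
styled <- gsub("\\bBLVD\\b", "Boulevard", styled)
styled
#> [1] "123 MAIN Street" "45 OCEAN Avenue" "7 ELM Boulevard"
```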
17.9 Auto-Generated Emails
Similar to proposal or event creation, we can auto-generate emails, ready for fundraisers to send. It seems that even with all the available information, likelihood scores, and suggested actions, fundraisers don’t contact (or don’t record in the CRM) their prospects and donors. We can simplify and reduce the number of steps for fundraisers by auto-generating emails. Gravyty, a start-up based in Boston, does exactly this.[14] Gravyty’s tool “First Draft” applies machine learning to the available donor/prospect data to create emails that fundraisers can edit and send in a few clicks. When keeping in touch with prospects or donors becomes a challenge, a solution such as “First Draft” eases the burden on the fundraisers’ already loaded shoulders. When we learn our fundraisers’ preferences and have interest-based data on our donors and prospects, this type of solution has the power to create very targeted and personal touch points.
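This is not Gravyty’s method; it is only an illustration of how far a template plus a handful of hypothetical CRM fields can go toward a first draft that the fundraiser edits and sends.

```r
# Illustration only: a templated first draft built from a few hypothetical CRM fields
library(glue)

prospect <- list(first_name = "Alex",
                 fund = "the Engineering Scholarship Fund",
                 last_gift_year = 2016,
                 officer = "Jordan")

glue_data(prospect,
  "Hi {first_name},\n\n",
  "Thank you again for supporting {fund} in {last_gift_year}. ",
  "I'd love to share a short update on what your gift made possible. ",
  "Would you have 20 minutes for a call next week?\n\n",
  "Best,\n{officer}")
```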
17.10 Interactive Data Analysis
As you saw in the introductory chapters, users are more likely to use your analysis if they understand it. Interactive data analysis offers one vehicle to let users be part of the analysis. Tableau, a data visualization software company and a darling of many analysts, uses its own dashboards for its marketing and sales team (Fink and Tibke 2012). Not only can the users see all the ingredients that make up a lead score, but they can also customize the analysis to answer specific questions. Using its “Visual Scoring” dashboard, Tableau reports a 22% increase in its conversion rate, that is, the ratio of buyers to all leads.
Tableau is not the only software that lets you create dynamic and interactive dashboards. Shiny by RStudio is a great open-source alternative. Rich Majerus and Samantha Wren are two leading experts in the non-profit / fundraising field who have created interactive tools using Shiny and R. Figure 17.6 shows a Shiny app, using fake data, that we built for USC users to adjust the weights on an activity scoring formula. By adjusting the weights, users can compare fundraisers on the activities they think are important. For example, a manager may not value visits as much as qualification. He or she can decrease the weight on visits to zero and increase the weight on prospects qualified to a higher number. The fundraiser names on the chart will then shift to show the new scores (that is, the names farthest to the right will have more qualification activity).
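Our app isn’t reproduced here, but a minimal Shiny sketch of the same weighted-score idea looks like this: sliders set the weights, and the chart re-ranks fundraisers as the weighted activity score changes. The names and numbers are fake.

```r
# A minimal Shiny sketch of a weighted activity score, using fake data
library(shiny)

fundraisers <- data.frame(
  name      = c("Avery", "Blake", "Casey", "Drew"),
  visits    = c(40, 25, 60, 10),
  qualified = c(5, 12, 3, 9),
  stringsAsFactors = FALSE
)

ui <- fluidPage(
  sliderInput("w_visits", "Weight: visits", min = 0, max = 5, value = 1, step = 0.5),
  sliderInput("w_qual", "Weight: prospects qualified", min = 0, max = 5, value = 1, step = 0.5),
  plotOutput("scores")
)

server <- function(input, output) {
  output$scores <- renderPlot({
    # Weighted activity score, recomputed whenever a slider moves
    score <- input$w_visits * fundraisers$visits +
             input$w_qual   * fundraisers$qualified
    ord <- order(score)
    barplot(score[ord], names.arg = fundraisers$name[ord], horiz = TRUE,
            las = 1, xlab = "Weighted activity score")
  })
}

shinyApp(ui, server)
```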
If you enjoyed reading this book, can we ask you for a small favor? It won’t cost you anything, but would help us tremendously.
- Run `source("http://arn.la/shareds4fr")` in your R console to share the book with your network
- Leave a review on Amazon
With our sincere thanks,
Ashutosh and Rodger
References

Abadi, Martín, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, et al. 2016. “Tensorflow: Large-Scale Machine Learning on Heterogeneous Distributed Systems.” arXiv Preprint arXiv:1603.04467.
Amatriain, Xavier, and Justin Basilico. 2011. “Netflix Recommendations.” http://bit.ly/2tnLnKt.
Amblee, Naveen, and Tung Bui. 2011. “Harnessing the Influence of Social Proof in Online Shopping: The Effect of Electronic Word of Mouth on Sales of Digital Microproducts.” International Journal of Electronic Commerce 16 (2). Taylor & Francis: 91–114.
Ashton, Nick, Simon G Lewis, Isabelle De Groote, Sarah M Duffy, Martin Bates, Richard Bates, Peter Hoare, et al. 2014. “Hominin Footprints from Early Pleistocene Deposits at Happisburgh, Uk.” PLoS One 9 (2). Public Library of Science: e88329.
Beveridge, Andrew, and Jie Shan. 2016. “Network of Thrones.” Math Horizons 23 (4). JSTOR: 18–22.
Bischl, Bernd, Michel Lang, Lars Kotthoff, Julia Schiffner, Jakob Richter, Erich Studerus, Giuseppe Casalicchio, and Zachary M. Jones. 2016. “mlr: Machine Learning in R.” Journal of Machine Learning Research 17 (170): 1–5. http://jmlr.org/papers/v17/15-066.html.
Bore, Inger-Lise. 2011. “Laughing Together? TV Comedy Audiences and the Laugh Track.” The Velvet Light Trap, no. 68. University of Texas Press: 24–34.
Box, G. E. P. 1976. “Science and Statistics.” Journal of the American Statistical Association 71: 791–99. http://www.tandfonline.com/doi/abs/10.1080/01621459.1976.10480949.
Breiman, Leo, Adele Cutler, Andy Liaw, and Matthew Wiener. 2015. RandomForest: Breiman and Cutler’s Random Forests for Classification and Regression. https://CRAN.R-project.org/package=randomForest.
Bresler, Alex. 2016. ForbesListR: Access Forbes List Data.
Brynjolfsson, Erik, Lorin Hitt, and Heekyung Kim. 2011. “Strength in Numbers: How Does Data-Driven Decision-Making Affect Firm Performance?”
Bult, Jan Roelf, Hiek Van der Scheer, and Tom Wansbeek. 1997. “Interaction Between Target and Mailing Characteristics in Direct Marketing, with an Application to Health Care Fund Raising.” International Journal of Research in Marketing 14 (4). Elsevier: 301–8.
Burnham, D. R., K. P.; Anderson. 2002. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. Springer-Verlag.
Chassin, David P, and Christian Posse. 2005. “Evaluating North American Electric Grid Reliability Using the Barabási–Albert Network Model.” Physica A: Statistical Mechanics and Its Applications 355 (2). Elsevier: 667–77.
Chen, Tianqi, Tong He, Michael Benesty, Vadim Khotilovich, and Yuan Tang. 2017. Xgboost: Extreme Gradient Boosting. https://CRAN.R-project.org/package=xgboost.
Cheng, Heng-Tze, Lichan Hong, Mustafa Ispir, Clemens Mewald, Zakaria Haque, Illia Polosukhin, Georgios Roumpos, et al. 2017. “TensorFlow Estimators: Managing Simplicity Vs. Flexibility in High-Level Machine Learning Frameworks.” In Proceedings of the 23rd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, 1763–71. New York, NY, USA: ACM. http://doi.acm.org/10.1145/3097983.3098171.
Chua, Hannah Faye, J Frank Yates, and Priti Shah. 2006. “Risk Avoidance: Graphs Versus Numbers.” Memory & Cognition 34 (2). Springer: 399–410.
Cialdini, Robert B. 1987. Influence. Vol. 3. A. Michel Port Harcourt.
Cleveland, William S. 1985. The Elements of Graphing Data. Monterey, CA: Wadsworth Advanced Books and Software.
Cohen, W. W. 1995. “Fast Effective Rule Induction.” Proceedings of the 12th International Conference on Machine Learning, 115–23. http://citeseer.ist.psu.edu/cohen95fast.html.
Colvin, Geoff. 2009. Talent Is Overrated. Findaway World.
Croxton, Frederick E, and Roy E Stryker. 1927. “Bar Charts Versus Circle Diagrams.” Journal of the American Statistical Association 22 (160). Taylor & Francis Group: 473–82.
DeGraff, Jeff. 2011. Innovation You: Four Steps to Becoming New and Improved. Ballantine Books.
Few, Stephen. 2006. Information Dashboard Design: The Effective Visual Communication of Data. O’Reilly Media.
———. 2008. “Dual-Scaled Axes in Graphs.” http://bit.ly/2e5oZle.
Fink, Elissa, and Wade Tibke. 2012. “Visual Scoring – the 360° View.” https://www.tableau.com/whitepapers/visual-scoring-360.
Golbeck, Jennifer, and James A Hendler. 2004. “Reputation Network Analysis for Email Filtering.” In CEAS.
Golombisky, Kim, and Rebecca Hagen. 2016. White Space Is Not Your Enemy: A Beginner’s Guide to Communicating Visually Through Graphic, Web & Multimedia Design. A K Peters/CRC Press.
Guimera, Roger, Stefano Mossa, Adrian Turtschi, and LA Nunes Amaral. 2005. “The Worldwide Air Transportation Network: Anomalous Centrality, Community Structure, and Cities’ Global Roles.” Proceedings of the National Academy of Sciences 102 (22). National Acad Sciences: 7794–9.
Hahsler, Michael. 2017. ArulesViz: Visualizing Association Rules and Frequent Itemsets. https://CRAN.R-project.org/package=arulesViz.
Hahsler, Michael, Sudheer Chelluboina, Kurt Hornik, and Christian Buchta. 2011. “The Arules R-Package Ecosystem: Analyzing Interesting Patterns from Large Transaction Datasets.” Journal of Machine Learning Research 12: 1977–81. http://jmlr.csail.mit.edu/papers/v12/hahsler11a.html.
Heath, Chip, and Dan Heath. 2007. Made to Stick: Why Some Ideas Survive and Others Die. Random House.
Holmes, Thomas J. 2011. “The Diffusion of Wal-Mart and Economies of Density.” Econometrica 79 (1). Wiley Online Library: 253–302.
Holte, Robert C. 1993. “Very Simple Classification Rules Perform Well on Most Commonly Used Datasets.” Machine Learning 11 (1). Springer: 63–90.
Hornik, Kurt, Christian Buchta, and Achim Zeileis. 2009. “Open-Source Machine Learning: R Meets Weka.” Computational Statistics 24 (2): 225–32.
Hothorn, Torsten, Kurt Hornik, Carolin Strobl, and Achim Zeileis. 2017. Party: A Laboratory for Recursive Partytioning. https://CRAN.R-project.org/package=party.
Jacomy, Mathieu, Tommaso Venturini, Sebastien Heymann, and Mathieu Bastian. 2014. “ForceAtlas2, a Continuous Graph Layout Algorithm for Handy Network Visualization Designed for the Gephi Software.” PloS One 9 (6). Public Library of Science: e98679.
Kuhn, Max, with contributions from Jed Wing, Steve Weston, Andre Williams, Chris Keefer, Allan Engelhardt, Tony Cooper, Zachary Mayer, et al. 2017. Caret: Classification and Regression Training. https://CRAN.R-project.org/package=caret.
Jørgensen, Bent, and Marta C Paes De Souza. 1994. “Fitting Tweedie’s Compound Poisson Model to Insurance Claims Data.” Scandinavian Actuarial Journal 1994 (1). Taylor & Francis: 69–93.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. Macmillan.
Koenker, Roger. 2005. Quantile Regression. 38. Cambridge university press.
———. 2017. Quantreg: Quantile Regression. https://CRAN.R-project.org/package=quantreg.
Kohavi, Ron, and Rajesh Parekh. 2004. “Visualizing Rfm Segmentation.” In Proceedings of the 2004 Siam International Conference on Data Mining, 391–99. SIAM.
Kuhn, Max, and Ross Quinlan. 2017. C50: C5.0 Decision Trees and Rule-Based Models. https://CRAN.R-project.org/package=C50.
Langfelder, Peter, and Steve Horvath. 2008. “WGCNA: An R Package for Weighted Correlation Network Analysis.” BMC Bioinformatics 9 (1). BioMed Central: 559.
Lantz, Brett. 2013. Machine Learning with R. Packt.
Liaw, Andy, Matthew Wiener, and others. 2002. “Classification and Regression by randomForest.” R News 2 (3): 18–22.
Maier, David. 1983. The Theory of Relational Databases. Vol. 11. Computer science press Rockville. http://web.cecs.pdx.edu/~maier/TheoryBook/TRD.html.
Malthouse, Edward C, and Robert C Blattberg. 2010. “Can We Predict Customer Lifetime Value?” In Perspectives on Promotion and Database Marketing: The Collected Works of Robert c Blattberg, 245–59. World Scientific.
Mankins, Michael, and Lori Sherer. 2014. “Help Reluctant Employees Put Analytic Tools to Work.” Harvard Business Review. https://hbr.org/2014/10/help-reluctant-employees-put-analytic-tools-to-work.
Matejka, Justin, and George Fitzmaurice. 2017. “Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics Through Simulated Annealing.” In Proceedings of the 2017 Chi Conference on Human Factors in Computing Systems, 1290–4. ACM.
McCarty, John A, and Manoj Hastak. 2007. “Segmentation Approaches in Data-Mining: A Comparison of Rfm, Chaid, and Logistic Regression.” Journal of Business Research 60 (6). Elsevier: 656–62.
Meyer, David, Evgenia Dimitriadou, Kurt Hornik, Andreas Weingessel, and Friedrich Leisch. 2017. E1071: Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), Tu Wien. https://CRAN.R-project.org/package=e1071.
Milborrow, Stephen. 2017. Rpart.plot: Plot ’Rpart’ Models: An Enhanced Version of ’Plot.rpart’. https://CRAN.R-project.org/package=rpart.plot.
Nandeshwar, Ashutosh R. 2006. “Models for Calculating Confidence Intervals for Neural Networks.” Master’s thesis, West Virginia University Libraries.
Newman, Mark. 2010. Networks: An Introduction. Oxford University Press.
O’Neil, Cathy. 2013. On Being a Data Skeptic. O’Reilly Media, Inc.
Page, Lawrence, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. “The Pagerank Citation Ranking: Bringing Order to the Web.” Stanford InfoLab.
Perlich, Claudia, Saharon Rosset, Richard D Lawrence, and Bianca Zadrozny. 2007. “High-Quantile Modeling for Customer Wallet Estimation and Other Applications.” In Proceedings of the 13th Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, 977–85. ACM.
Porter, Michael E. 1996. “What Is Strategy?” Harvard Business Review, November–December.
Prickett, Tricia, Neha Gada-Jain, and Frank J Bernieri. 2000. “The Importance of First Impressions in a Job Interview.” In Annual Meeting of the Midwestern Psychological Association, Chicago, Il.
Ripley, Brian. 2015. Class: Functions for Classification. https://CRAN.R-project.org/package=class.
Ryan, Richard M, and Edward L Deci. 2000. “Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being.” American Psychologist 55 (1). American Psychological Association: 68.
Scott, John. 2017. Social Network Analysis. Sage.
Smith, Paul. 2012. Lead with a Story: A Guide to Crafting Business Narratives That Captivate, Convince, and Inspire. AMACOM Div American Mgmt Assn.
Stadtler, Hartmut. 2015. “Supply Chain Management: An Overview.” In Supply Chain Management and Advanced Planning, 3–28. Springer.
Steele, Julie, and Noah Iliinsky. 2010. Beautiful Visualization: Looking at Data Through the Eyes of Experts. O’Reilly Media.
Tang, Yuan, JJ Allaire, RStudio, Kevin Ushey, Daniel Falbel, and Google Inc. 2017. Tfestimators: High-Level Estimator Interface to Tensorflow in R. https://github.com/rstudio/tfestimators.
Therneau, Terry, Beth Atkinson, and Brian Ripley. 2017. Rpart: Recursive Partitioning and Regression Trees. https://CRAN.R-project.org/package=rpart.
Tufte, Edward R. 2001. The Visual Display of Quantitative Information. Graphics Pr.
Tukey, J.W. 1977. Exploratory Data Analysis. Mass: Addison-Wesley Pub. Co.
Tweedie, MCK. 1984. “An Index Which Distinguishes Between Some Important Exponential Families.” In Statistics: Applications and New Directions: Proc. Indian Statistical Institute Golden Jubilee International Conference, 579–604.
Verhoef, Peter C, Penny N Spring, Janny C Hoekstra, and Peter SH Leeflang. 2003. “The Commercial Use of Segmentation and Predictive Modeling Techniques for Database Marketing in the Netherlands.” Decision Support Systems 34 (4). Elsevier: 471–81.
Voehl, Frank, and H James Harrington. 2016. Change Management: Manage the Change or It Will Manage You. Vol. 6. CRC Press.
Warnes, Gregory R., Ben Bolker, Thomas Lumley, and Randall C. Johnson. 2015. Gmodels: Various R Programming Tools for Model Fitting. https://CRAN.R-project.org/package=gmodels.
Wickham, H. 2011. “The Split-Apply-Combine Strategy for Data Analysis.” Journal of Statistical Software 40 (1): 1–29. https://www.jstatsoft.org/article/view/v040i01/v40i01.pdf.
Wickham, Hadley. 2009. Ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. http://ggplot2.org.
Wilkinson, Leland. 2006. The Grammar of Graphics. Springer Science & Business Media.
Witten, Ian H., and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. 2nd ed. San Francisco: Morgan Kaufmann.
Wong, Dona M. 2013. The Wall Street Journal Guide to Information Graphics: The Dos and Don’ts of Presenting Data, Facts, and Figures. W. W. Norton & Company.
Yang, Amoy X. 2004. “How to Develop New Approaches to Rfm Segmentation.” Journal of Targeting, Measurement and Analysis for Marketing 13 (1). Springer: 50–60.
Yang, Yi, Wei Qian, and Hui Zou. 2016. TDboost: A Boosted Tweedie Compound Poisson Model. https://CRAN.R-project.org/package=TDboost.
———. 2017. “Insurance Premium Prediction via Gradient Tree-Boosted Tweedie Compound Poisson Models.” Journal of Business & Economic Statistics. Taylor & Francis, 1–15.
Zacks, Jeff, and Barbara Tversky. 1999. “Bars and Lines: A Study of Graphic Communication.” Memory & Cognition 27 (6): 1073–9.
[12] The supposedly “smart” platform to scan receipts used MTurks: http://bit.ly/2DLpr1s
[13] The Outline: http://bit.ly/2kRxIJ9
[14] Huffington Post: http://bit.ly/2BJzqXD