#39 – FUTURE TECHNOLOGIES HERE TODAY – CAPERS JONES

This paper discusses a number of interesting technical advances that are theoretically possible in 2014, although not currently available.  Hopefully, showing the software community what is technically feasible will encourage universities and larger corporations to move more quickly.

ASSISTANCE FOR THE DEAF USING GOOGLE GLASS
It is possible today to integrate Dragon NaturallySpeaking or another voice-to-text tool into the software packages that run on Google Glass.  This would give deaf people immediate text transcriptions of spoken conversations.  Even better, Google Translate could also be included for real-time translation from other natural languages such as Spanish, Japanese, and Russian.  Other assistive features would include visual warnings for things like fire alarms, sirens, and other hazards the deaf might not be able to identify.  Ideally Google would cooperate with major hearing associations such as Gift of Hearing to develop the needed capabilities.
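The pipeline implied above can be sketched in a few lines.  This is a minimal illustration, not the real Dragon or Google Translate integration; the function names and the translation stub are hypothetical stand-ins.

```python
# Sketch of the Glass caption pipeline: recognized speech is optionally
# translated, then rendered as a heads-up caption; classified hazard
# sounds become visual warnings.  All names here are illustrative.

def translate(text, source_lang, target_lang):
    # Stand-in for a real translation-service call.
    if source_lang == target_lang:
        return text
    return f"[{source_lang}->{target_lang}] {text}"

def caption_for_display(utterance, source_lang, user_lang="en"):
    """Turn one recognized utterance into a heads-up caption."""
    text = translate(utterance, source_lang, user_lang)
    return f"\U0001F5E8 {text}"  # speech-balloon glyph marks spoken audio

ALERT_SOUNDS = {"fire_alarm": "FIRE ALARM", "siren": "SIREN"}

def caption_for_alert(sound_id):
    """Turn a classified hazard sound into a visual warning."""
    label = ALERT_SOUNDS.get(sound_id, "UNKNOWN ALERT")
    return f"\u26A0 {label} \u26A0"
```

The point of the sketch is the routing: one path for conversation (transcribe, translate, display) and a second path for hazard sounds that the wearer cannot hear.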

ANIMATED FULL COLOR REQUIREMENTS AND DESIGN TOOL
Software applications are dynamic and have no value unless they are running.  They also change over time as new features are added.  Static diagrams and text are not adequate for designing dynamic systems such as major software applications.  It is technically possible to build a full-color animated design tool (even 3D is possible) that could handle issues such as performance, security, and application evolution in a dynamic fashion.  The design tool would have a variety of supplemental features, such as simulating viral attacks and showing increasing entropy or complexity over time.  Current design methods such as UML and state transition diagrams would be the basis for the diagrams, but in a dynamic, moving, full-color format.  The software industry provides very powerful design tools for engineering and other fields, but lags in sophisticated design methods for its own applications.

VIRTUAL UNIVERSITY FOR TRAINING SOFTWARE ENGINEERS AND OTHERS
It is technically possible to license one of the virtual reality rendering engines from a game company and use it to construct a virtual university campus.  Avatars of students and faculty could interact in a fashion similar to an actual university.  Additional features of the virtual university would be integral assistance for blind and deaf students; immediate translation of spoken and written materials into the languages of the students; and social interaction among the students in something like a virtual common room.  The virtual university would also have a world-class library, which essentially means access to all current on-line libraries.  Unlike a real university, the virtual university could operate 24 hours a day, 365 days a year.  Major vendors might also provide access to their tools, such as project management tools, static analysis tools, and cost estimating tools.  Since the technology for doing this exists in 2014, it would be fairly easy to get started.

ESTABLISHING A LIBRARY OF CERTIFIED REUSABLE COMPONENTS
Custom designs and manual coding are intrinsically expensive, error prone, and inefficient.  It is technically possible to establish a library of certified reusable materials that could be used to construct applications from standard parts in a small fraction of the time required today.  A major precursor is a formal taxonomy that identifies the major forms of applications and also the major component parts that go into them.  Currently there are excellent taxonomies for full applications, but no effective taxonomies that drop below that level to the specific features that comprise software applications.  Another precursor is that all materials in the library must be certified to near zero-defect levels and proven to be free from virus infections and other forms of malware.  Once an application type is identified, the library would include a full bill-of-materials processor that would show which components are needed and whether they are available from the library or require custom development.  The essential goal is to construct between 90% and 100% of each application from standard reusable components rather than from custom design and manual coding.  The reusable materials would encompass reusable requirements, architecture, design, code, test cases, data structures, and user training information.
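The bill-of-materials step described above can be sketched simply: map an application type to its required features via the taxonomy, then split those features into library hits and custom work.  The taxonomy entries and library contents below are invented placeholders.

```python
# Hedged sketch of a bill-of-materials processor.  A real version would
# sit on a far richer taxonomy and a certified, malware-free library.

FEATURE_TAXONOMY = {
    "web_banking": ["login", "audit_log", "account_ledger", "fraud_check"],
}

CERTIFIED_LIBRARY = {"login", "audit_log", "account_ledger"}

def bill_of_materials(app_type):
    """List required components, flag those needing custom development,
    and report the achievable reuse percentage."""
    needed = FEATURE_TAXONOMY[app_type]
    reusable = [c for c in needed if c in CERTIFIED_LIBRARY]
    custom = [c for c in needed if c not in CERTIFIED_LIBRARY]
    reuse_pct = 100 * len(reusable) / len(needed)
    return {"reusable": reusable, "custom": custom, "reuse_pct": reuse_pct}
```

The reuse percentage falling out of the processor is exactly the 90%-to-100% figure the paragraph sets as the goal.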

INTELLIGENT AGENTS FOR PROJECT PLANNING AND ESTIMATING
It is possible today (and actually being done by Software Risk Master) to use intelligent agents as tools for assisting in project planning and project estimating.  The process would start by identifying the specific size, type, and class of the software project to be constructed, using multiple-choice menus.  Once the application has been placed on a firm taxonomy, the intelligent agents would aggregate and summarize the results from all similar projects done over the past five years.  Further, the intelligent agents would identify common risks such as creeping requirements, quality problems due to bypassing inspections and static analysis, and schedule delays.  Assuming that perhaps 50 similar projects have already been done for every new project about to start, the intelligent agents would also identify the methodologies that had the best quality and lowest costs, the methodologies that caused problems, the most effective programming languages, and other factors that affected the past projects for good or ill.  Moreover, the intelligent agents would suggest sources for standard reusable components that could eliminate custom design and manual coding.
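The aggregation step can be sketched as a filter over historical projects that share the new project's taxonomy slot, followed by a per-methodology summary.  The project records below are invented sample data, not SRM's benchmark database.

```python
# Sketch of taxonomy-based aggregation: pull historical projects of the
# same type and similar size, then summarize outcomes per methodology.

from statistics import mean

HISTORY = [
    {"type": "web", "size_fp": 1000, "method": "agile",     "months": 11, "defects": 120},
    {"type": "web", "size_fp": 1100, "method": "agile",     "months": 12, "defects": 150},
    {"type": "web", "size_fp":  950, "method": "waterfall", "months": 16, "defects": 400},
]

def summarize_similar(app_type, size_fp, tolerance=0.25):
    """Aggregate outcomes of past projects of similar type and size."""
    low, high = size_fp * (1 - tolerance), size_fp * (1 + tolerance)
    similar = [p for p in HISTORY
               if p["type"] == app_type and low <= p["size_fp"] <= high]
    by_method = {}
    for p in similar:
        by_method.setdefault(p["method"], []).append(p)
    return {m: {"avg_months": mean(p["months"] for p in ps),
                "avg_defects": mean(p["defects"] for p in ps)}
            for m, ps in by_method.items()}
```

With 50 or so comparable projects on file, the same summary immediately shows which methods had the best quality and schedules for that slot of the taxonomy.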

SOFTWARE STARTUP VENTURE ANALYSIS ENGINE
There is a high failure rate among startup companies, and software startups in particular.  Software Risk Master (SRM) already predicts the number of rounds of venture funding needed to build and market software applications, as well as the equity dilution for the founders.  However, a full startup engine would provide additional information such as guidance about small business loans; information on the best states for startup companies (Rhode Island, for example, is not very good); tax information; and information on the non-technical aspects of business startups, such as the probable costs of accountants, attorneys, marketing channels, advertising over various channels, and the other complex topics that entrepreneurs may not know.  (In 2010 the state of Rhode Island unwisely entered venture funding and guaranteed almost $100,000,000 to Curt Schilling’s 38 Studios game company, which soon went bankrupt, leaving the state with a huge bond debt.  The state performed no due diligence or risk analysis at all.  The author’s SRM tool was run retroactively and predicted an 88% chance of failure.  It also predicted that $100 million was not enough once maintenance and enhancements were factored into the equation.  The idea is to perform these risk predictions before money is committed, not after the company has already failed.)
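The dilution arithmetic behind such predictions is simple and worth making explicit: each funding round sells a fraction of the company, multiplying the founders' remaining stake.  The round sizes below are illustrative, not SRM's actual model.

```python
# Sketch of founder equity dilution across venture rounds.

def founder_stake_after(rounds):
    """rounds: fraction of equity sold in each round, e.g. [0.25, 0.20].
    Returns the founders' remaining fraction of the company."""
    stake = 1.0
    for sold in rounds:
        stake *= (1.0 - sold)
    return stake
```

For example, three rounds selling 25%, 20%, and 20% leave the founders with 0.75 × 0.80 × 0.80 = 48% of the company, which is why the number of rounds matters as much as their size.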

SOFTWARE OUTSOURCE CONTRACT ANALYSIS ENGINE
The author has worked as an expert witness in a dozen lawsuits where outsource vendors were charged with breach of contract for delivering non-working software, delivering too many bugs, or not delivering a software product at all.  About 5% of outsource agreements end up in court, and about 15% are terminated prematurely.  Some of the contracts themselves appeared to be flawed.  Software Risk Master (SRM) has a special estimating mode that predicts both the odds of outsource litigation and the probable costs for both the plaintiff and the defendant should litigation occur.  It would be desirable to use SRM before signing outsource contracts, showing both the client and the vendor what is needed to achieve a successful outcome with a low probability of litigation, and how much each might have to spend on litigation in the event of failure.  The three most common problems noted during breach-of-contract cases were poor quality control, excessive requirements creep combined with poor change control, and extremely lax monitoring of progress by both vendors and clients.  All of these problems are avoidable if an optimal technology stack is deployed.

SOFTWARE QUALITY ANALYSIS AND CONTROL ENGINE
Most companies that build software depend too much on testing and often bypass defect prevention and pre-test defect removal methods such as static analysis and inspections.  It is technically possible to build a sophisticated software quality analysis and control engine that both predicts and measures the results of any combination of defect prevention, pre-test defect removal, and test stages.  The Namcook Analytics Software Risk Master (SRM) tool has a working version of such an engine that covers defect prevention, pre-test removal, and six common forms of testing.  The same engine can also demonstrate peripheral and secondary quality approaches such as pair programming, use of ISO quality standards, and the use of certified test and quality assurance personnel versus untrained development personnel.  The SRM engine predicts defect removal efficiency, defect removal costs, delivered defects, technical debt, cost of quality (COQ), and maintenance, customer support, and lifetime defect repair costs.
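The core arithmetic such an engine rests on can be shown in a few lines: each removal stage catches some fraction of the defects that reach it, so delivered defects fall multiplicatively.  The efficiency values in the example are illustrative, not SRM's published benchmarks.

```python
# Sketch of chained defect removal: delivered defects are the defect
# potential multiplied through (1 - efficiency) for each stage.

def delivered_defects(defect_potential, stage_efficiencies):
    """Chain removal stages; each efficiency is the fraction removed."""
    remaining = defect_potential
    for eff in stage_efficiencies:
        remaining *= (1.0 - eff)
    return remaining

def cumulative_dre(defect_potential, stage_efficiencies):
    """Overall defect removal efficiency of the whole stage sequence."""
    remaining = delivered_defects(defect_potential, stage_efficiencies)
    return 1.0 - remaining / defect_potential
```

For example, 1,000 potential defects passed through static analysis at 55%, inspections at 65%, and testing at 75% leave 1000 × 0.45 × 0.35 × 0.25 ≈ 39 delivered defects, a cumulative removal efficiency of about 96% — which is why skipping the pre-test stages is so costly.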

CYBER ATTACK SIMULATION TOOL
It is technically possible to construct an effective cyber-attack simulation tool that could be used to simulate viruses, denial-of-service attacks, worms, and other threat vectors during software development.  The idea is to have a threat analysis engine that stays current, and then use it as a design aid when building software applications that are likely to be attacked because they manipulate financial, medical, or classified data.  The goal is to raise the immunity of software to attacks and threat vectors, and also to improve the effectiveness of firewalls, anti-virus packages, and other defensive methods.

PORTFOLIO ANALYSIS ENGINE
Today in 2014 the software portfolio for a Fortune 500 company might contain 5,000 applications and more than 10,000,000 function points.  Some applications are internal; some are COTS packages; and some are cloud based.  Because portfolios are taxable assets, there is a strong incentive for knowing what is in them, how much they cost to build, and how much they cost to maintain.  Additional useful information would be the ages and decay rates of all current applications.  Namcook Analytics LLC has a prototype portfolio analysis engine that already does this for several industries.  However, a full portfolio analysis engine would be pre-loaded with data from at least 50 industry sectors such as manufacturing, banking, health care, insurance, state and municipal governments, and many others.  The engine would maintain a complete catalog of every application, including the date the application entered the portfolio, a history of changes to the application, cyber attacks against the application, number of users, and other key quantitative facts.  Quality and defect data would also be included, which may be necessary in the event of litigation for poor quality or breach of contract.  The portfolio analysis engine would also provide warnings of aging legacy applications whose maintenance costs are above average and which might be in urgent need of renovation or replacement.  The value of a portfolio analysis engine rises with the size of the enterprise.  Small companies in one location can easily understand their portfolios, but in large multi-national corporations with 25 to 50 locations in dozens of countries, knowledge of the corporate or even business-unit portfolios seldom exists.
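The catalog record and the aging-application warning described above can be sketched directly.  The field names, sample applications, and the ten-year age threshold are all illustrative assumptions, not features of the Namcook prototype.

```python
# Sketch of a portfolio catalog with a renovation warning: flag apps
# whose maintenance cost is above the portfolio average and whose age
# suggests renovation or replacement.

from datetime import date

portfolio = [
    {"name": "ledger",  "entered": date(1998, 4, 1), "annual_maint_cost": 900_000},
    {"name": "crm",     "entered": date(2012, 6, 1), "annual_maint_cost": 150_000},
    {"name": "payroll", "entered": date(2001, 1, 1), "annual_maint_cost": 600_000},
]

def renovation_candidates(apps, today=date(2014, 1, 1)):
    """Names of apps with above-average maintenance cost and age > 10 years."""
    avg_cost = sum(a["annual_maint_cost"] for a in apps) / len(apps)
    return [a["name"] for a in apps
            if a["annual_maint_cost"] > avg_cost
            and (today - a["entered"]).days > 10 * 365]
```

A full engine would of course carry far more per-application facts (users, defects, attack history), but the warning logic is essentially this comparison run across thousands of records.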

SOFTWARE METHODOLOGY AND BEST PRACTICE ANALYSIS ENGINE
As of 2014 there are more than 35 different software development methods, including agile, extreme programming, pair programming, the Rational Unified Process (RUP), the Team Software Process (TSP), Merise, PRINCE2, waterfall, and many more.  Some methods, such as agile, are effective for small projects but do not scale up well.  Others, such as the SEI CMMI approach, work well on large systems but are too cumbersome for small companies.  Today in 2014 selecting a method resembles joining a cult more than making a rational technical decision.  It is technically possible to have a methodology selection engine that uses empirical data from completed projects to aid in selecting the optimum set of methodologies for large companies (who always need more than one), and the optimum methodology for specific projects.  The data for selection would include quality, schedule, cost, and maintenance information.  The author’s Software Risk Master (SRM) tool can demonstrate the results of any methodology, but the kind of engine discussed here would move upstream and predict the best methods or combinations of methods for any size of project and any form of company or government agency.  As soon as the application’s size, class, and type are identified, the engine would list the best methods in order of effectiveness and also show the methods that have led to problems or failure for the same type of application.  The idea is to avoid major failures such as the Obamacare website, the Rhode Island motor vehicle system, the 38 Studios bankruptcy, the Denver Airport fiasco, and other embarrassing software failures caused by mismatches between applications and methodologies.
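The selection step can be sketched as a filter-and-rank over empirical benchmarks: keep the methods applicable at the project's size, rank them by historical success, and flag the ones with a record of trouble.  The benchmark rows and success rates below are invented examples, not measured data.

```python
# Sketch of a methodology selection engine: rank applicable methods
# best-first and separate out those with poor historical outcomes.

BENCHMARKS = [
    {"method": "agile",     "max_size_fp":  2_000, "success_rate": 0.85},
    {"method": "TSP",       "max_size_fp": 50_000, "success_rate": 0.80},
    {"method": "waterfall", "max_size_fp": 50_000, "success_rate": 0.55},
]

def rank_methods(size_fp, min_success=0.60):
    """Return (recommended, risky) method lists for a project size."""
    fits = [b for b in BENCHMARKS if size_fp <= b["max_size_fp"]]
    fits.sort(key=lambda b: b["success_rate"], reverse=True)
    recommended = [b["method"] for b in fits if b["success_rate"] >= min_success]
    risky = [b["method"] for b in fits if b["success_rate"] < min_success]
    return recommended, risky
```

At 10,000 function points the sketch drops agile as out of range, recommends TSP, and flags waterfall — the same kind of size-driven mismatch warning the paragraph argues for.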

CORPORATE AND GOVERNMENT RISK ANALYSIS ENGINE
The Namcook Analytics master catalog of software risks includes 210 specific risks.  When financial and business risks are added to the mix, there are about 1,000 major kinds of risk that modern companies and government agencies face: financial risks, legal risks, software failure risks, Sarbanes-Oxley governance risks, customer dissatisfaction risks, employee morale risks, patent litigation risks, and many more.  It is technically possible in 2014 to build a corporate risk planning engine that would identify all relevant risks and suggest possible solutions for risk prevention and risk abatement.  This would be a true expert system mixed with intelligent agents that extract current risk information from web sources.  The idea is to show every company a weighted total of the major risks it is likely to face over the next 12 months and to suggest the optimum set of risk avoidance and risk mitigation techniques.  The Software Risk Master (SRM) tool can do this today for software risks, but there are many other categories of risk, such as bankruptcy, Sarbanes-Oxley violations, and threats by patent trolls, that also need to be included in a corporate risk analysis engine.
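The "weighted total" mentioned above can be made concrete with standard expected-loss arithmetic: each risk carries a probability and an impact, and the engine ranks risks by probability × impact.  The probabilities and dollar impacts below are invented examples.

```python
# Sketch of a weighted risk report: rank risks by expected loss and
# total the 12-month exposure.

RISKS = [
    {"name": "schedule slip",     "probability": 0.40, "impact":   500_000},
    {"name": "patent litigation", "probability": 0.05, "impact": 2_000_000},
    {"name": "staff attrition",   "probability": 0.25, "impact":   200_000},
]

def weighted_risk_report(risks):
    """Return risks ranked by expected loss, plus the total exposure."""
    ranked = sorted(risks, key=lambda r: r["probability"] * r["impact"],
                    reverse=True)
    total = sum(r["probability"] * r["impact"] for r in risks)
    return ranked, total
```

Note how the ranking differs from intuition: the low-probability patent suit outranks the likely attrition problem because its expected loss is larger — exactly the kind of ordering a risk engine should surface.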

PRE-SELECTED LIBRARIES OF E-BOOKS FOR KNOWLEDGE WORKERS
There are millions of books in print, and it is not easy for knowledge workers to stay current with the latest advances in their field.  It is technically possible today in 2014 for intelligent agents to gather the titles and reviews of all books and articles on specific topics.  Further, new materials could be added as they become available.  The catalogs would be organized by occupation group, such as project managers, business analysts, quality assurance, software engineers, and test personnel.  A basic library of information for each group of knowledge workers would be displayed.  Even better might be an intelligent-agent abstract service that could provide highlights of the most relevant studies and materials in condensed form.  New employees in major corporations might receive a full set of relevant e-books as part of their employment.  Professional organizations such as the Project Management Institute (PMI) and the International Function Point Users Group (IFPUG) might offer discounts on specific relevant titles or indeed whole collections of relevant books.

NATIONAL PROGRAMMING LANGUAGE ARCHIVES
The software industry currently has a total of almost 3,000 programming languages.  New languages such as Go and F# are being developed at a rate of more than two per calendar month.  Thousands of legacy applications are coded in older languages that are dead or dying, such as CORAL and MUMPS.  There is an urgent need for a university, government agency, or non-profit to assemble materials on all known programming languages, including working compilers, debugging tools, textbooks, and ancillary materials.  This would be a resource for teaching maintenance programmers older languages so that critical legacy software can continue to be maintained.  The archives would be created as a public service for the software community.  While a large company such as IBM or Microsoft might do this, both have vested interests in their own language technologies.  Therefore a neutral non-profit or a major university is the most likely organization to attempt archiving older programming languages.  Incidentally, developers of new languages would be expected to provide the archive facility with working versions as new languages are released to the world.

SUMMARY AND CONCLUSIONS
The topics discussed in this short paper are all technically feasible in 2014.  However, it may be some years before the actual tools are fully developed and widely deployed.  Some of the ideas discussed here are further elaborated in the chapter on software development in 2049 in the author’s Software Engineering Best Practices (McGraw-Hill, 2010).  The author’s more recent books, The Economics of Software Quality (Addison-Wesley, 2012) and The Technical and Social History of Software Engineering (Addison-Wesley, 2014), also look ahead to 2019.
