Concurrent Engineering/Introduction


Mission Statement

The mission of this wikibook is to create a living document that describes what concurrent engineering is and how it is useful. It is geared toward entry-level engineers in the workplace.

Definition: Concurrent engineering is "a process in which appropriate disciplines are committed to work interactively to conceive, approve, develop, and implement product programs that meet pre-determined objectives," as defined at http://www.mne.psu.edu/lamancusa/html/ConcEng.htm.

Why use wikibooks?

Wiki collaboration is still quite new, and teachers and students alike are rarely prepared for the wiki environment. However, I will say that the benefits can be quite large: see books like Human Physiology (which has migrated to its own dedicated wiki website) and Foundations of Education and Instructional Assessment (which is in its fourth edition) for some of our best success stories. Students have the ability to learn not just by writing their own paragraphs, but also by reviewing and constructively criticizing those of others. Being asked to rephrase or clarify something can force a student to reformulate and reinforce those ideas in their own mind. So even if the going is tough, be assured that the results are very rewarding! (comments by wikiuser Whiteknight)

Product Concept

Students in ME 518, Concurrent Design of Product, conducted a brainstorming session to determine what they would like in a "book" about concurrent design.

Product Data Sheet

A statement of what a wikibook on Concurrent Engineering should do

Concurrent Engineering

  • by Michael Koch, adapted from an ME 519 paper. Please review and edit; it may contain issues since it was lifted from another paper.

The concurrent engineering method is still a relatively new design management system, but it has matured in recent years into a well-defined systems approach to optimizing engineering design cycles.[1] Because of this, concurrent engineering has garnered much attention from industry and has been implemented in a multitude of companies, organizations, and universities, most notably in the aerospace industry.[2]

The basic premise of concurrent engineering rests on two concepts. The first is that all elements of a product's life cycle, from functionality, producibility, assembly, and testability to maintenance, environmental impact, and finally disposal and recycling, should be taken into careful consideration in the early design phases.[3] The second is that these design activities should all occur at the same time, or concurrently. The overall goal is that the concurrent nature of these processes significantly increases productivity and product quality, aspects that are obviously important in today's fast-paced market.[4] This philosophy is key to the success of concurrent engineering because it allows errors and redesigns to be discovered early in the design process, while the project is still in a more abstract and possibly digital form. By locating and fixing these issues early, the design team can avoid what often become costly errors as the project moves to more complicated computational models and eventually into the physical realm.[5]

As mentioned above, part of the design process is to ensure that the entire product life cycle is taken into consideration. This includes establishing user requirements, developing early conceptual designs, running computational models, creating physical prototypes, and eventually manufacturing the product. The process also accounts for funding, workforce capability, and time, factors that are often not intuitively considered but are extremely important to the success of a concurrent engineering system. As before, the extensive use of forward planning allows unforeseen design problems to be caught early, so that the basic conceptual design can be altered before physical production commences. The amount of money that can be saved by doing this correctly has proven to be significant and is generally the deciding factor for companies moving to a concurrent design framework.[6]

One of the most important reasons for the success of concurrent engineering is that, by definition, it redefines the basic design process structure that was commonplace for decades: a structure based on a sequential design flow, sometimes called the 'Waterfall Model'.[7][8] Concurrent engineering significantly modifies this outdated method and instead uses what has been termed an iterative or integrated development method.[9] The difference between the two methods is that the waterfall method moves in a completely linear fashion, starting with user requirements and moving sequentially through design, implementation, and additional steps until there is a finished product. The problem is that this process does not look backwards or forwards from the step it is on to fix possible problems, so if something does go wrong, the design usually must be scrapped or heavily altered. The iterative design process, on the other hand, is more cyclic: as mentioned before, all aspects of the product's life cycle are taken into account, allowing for a more evolutionary approach to design.[10] The difference between the two design processes can be seen graphically in Figure 1.

Fig. 1 – “Waterfall” or Sequential Development Method vs. Iterative Development Method

A significant part of this new method is that the individual engineer is given much more say in the overall design process, due to the collaborative nature of concurrent engineering. While this may seem insignificant or irrelevant to many, giving the designer ownership plays a huge role in the productivity of the employee and the quality of the product being produced. This stems from the sometimes unapparent fact that a person who has a sense of gratification and ownership over their work will work harder and design a more robust product than an employee who is assigned a task with little say in the general process.[11]

Making this sweeping change raises many organizational and managerial challenges that must be given special consideration when companies and organizations move toward such a system. From this standpoint, issues such as implementing early design reviews, enabling communication between engineers, ensuring software compatibility, and opening up the design process to allow for concurrency create problems of their own.[12] Similarly, there must be a strong basis for teamwork, since the overall success of the method relies on the ability of engineers to work together effectively. This can be a difficult obstacle, but it must be tackled early to avoid later problems.[13]

Similarly, now more than ever, software plays a huge role in the engineering design process. From CAD packages to finite element analysis tools, the ability to quickly and easily modify digital models to predict future design problems is hugely important no matter what design process is used. In concurrent engineering, however, software's role becomes even more significant: because of the collaborative nature of the process, each engineer's design models must be able to 'talk' to each other in order to successfully apply the concepts of concurrent engineering. We will go into this more as we look at the design methods of NASA Jet Propulsion Laboratory's Team-X as well as distributed CE, both of which rely heavily on complex software system integration to take concurrent engineering to the next level.
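As a rough illustration of what it means for design models to 'talk' to each other, the sketch below shows two discipline models exchanging values through a neutral, shared parameter file. It is only a minimal sketch: the field names, file name, and numbers are hypothetical, and real concurrent engineering environments rely on much richer schemas and dedicated integration software.

    import json
    from dataclasses import dataclass, asdict

    # Hypothetical shared parameter record exchanged between discipline models.
    # The field names are illustrative, not taken from any real CE toolchain.
    @dataclass
    class SharedParameter:
        name: str      # e.g. "solar_array_area"
        value: float
        units: str
        owner: str     # discipline responsible for this value

    def publish(params, path):
        """Write the current parameter set to a neutral JSON file any tool can read."""
        with open(path, "w") as f:
            json.dump([asdict(p) for p in params], f, indent=2)

    def load(path):
        """Read parameters published by another discipline's model."""
        with open(path) as f:
            return [SharedParameter(**record) for record in json.load(f)]

    # Example: the power model publishes a value that the thermal model later consumes.
    publish([SharedParameter("solar_array_area", 4.2, "m^2", "power")], "shared_params.json")
    thermal_inputs = load("shared_params.json")
    print(thermal_inputs[0].name, thermal_inputs[0].value, thermal_inputs[0].units)

The key point is not the file format but the discipline-neutral contract: as long as every tool can read and write the shared representation, each engineer's model can react to the others' latest numbers instead of waiting for a formal hand-off.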

Extreme Collaboration

  • entered by Michael Koch; will come back and clean this up. Also, is this the right place for it?
  • I think this might be more appropriate in the "Communication Systems" section; does anyone from the communication team agree?

Extreme collaboration (XC), a term coined by Gloria Mark, describes NASA JPL's Team-X approach to concurrent engineering. Team-X, also known as the Advanced Projects Design Team, was formed in the mid-1990s to act as an internal consulting group to NASA, as well as to outside entities, on the design of new proposals for space missions. The overarching goal was to reduce the time necessary to complete these proposals, which define all aspects of a mission. Below we will look at how extreme collaboration works, how it has been successful, and where its limitations lie [20]. It should be noted that this method has been copied by many organizations because of its documented success in increasing productivity by as much as a factor of two [25].

Simply put, the Team-X approach is essentially concurrent engineering taken a step further: all of the design engineers are co-located in one large room for intense concurrent design sessions. In this room, known as the Project Design Center, the design team uses networked computers with access to data repositories, real-time design information, and computer modeling and simulation tools to create the final design [24]. These design sessions generally last about three to four hours, during which 10 to 20 engineers from disciplines such as mechanical, structural, and electrical engineering, along with a facilitator and the customer, work together to quickly and efficiently create a design proposal [23].

Fig. 2. Team-X design session in progress

Pre-planning for this sort of collaboration is extremely important to ensure that everything is in place so that the engineers can work quickly and avoid hold-ups in the design process. These hold-ups, or 'latency', are ultimately what must be reduced at all costs, and the reduction in latency is essentially what has made XC so successful. In traditional collaboration, engineers may wait days or weeks for important design information, which in turn causes project delays. In XC, latency is severely reduced by having all the pertinent engineers in one place, allowing problems to be quickly located and fixed through local collaboration; XC thus focuses on what is arguably the most obvious source of lost time in design [22]. Looking toward future research, it seems possible that this type of collaboration could be emulated in a distributed fashion (i.e., with engineers who are not co-located but connected only through the internet).

Another important part of XC is its organizational structure, which is deliberately egalitarian: minimal managerial intervention is required for XC to function properly. This is supported by research showing that engineers waste roughly 10% of their time waiting on management approval, and that consulting their 'knowledge networks' is more efficient than consulting their supervisors. The demonstrated success of an egalitarian structure in engineering design is hugely important, both for empowering engineers and for giving ownership to the individual. This has been realized in many realms, such as community organizing, but seems to be missing in many engineering organizations today. Evidence that a nearly flat hierarchy is a viable way to form an engineering design team suggests wider application through distributed collaborative engineering [22], a topic that will be discussed later.

Finally, what makes the Team-X approach to concurrent engineering so successful is its use of both social and electronic networks. By co-locating the entire design team, extreme collaboration lets a very efficient human network evolve, removing the barriers of email, telephone, and managerial approval that can often hinder complex system design. A conversation about one design area might be overheard by an engineer who finds the information useful, creating a more open dialogue around design information. The electronic network, enabled by connected spreadsheets, allows engineers to quickly share calculated data that might be pertinent to other designers' work. The overall effect again comes back to the reduction of latency, a major problem in complex system design [21].
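The 'connected spreadsheets' idea can be pictured as a shared table of named values in which an update made at one station is immediately visible to every station that depends on it. The sketch below is a toy, in-memory version of that idea with made-up parameter names; the actual Team-X infrastructure is far more elaborate.

    from collections import defaultdict

    # A toy "connected spreadsheet": stations publish named values, and every
    # station that subscribed to a value is notified immediately, so nobody
    # waits days for an update. Parameter names and numbers are illustrative.
    class SharedSheet:
        def __init__(self):
            self.values = {}                      # latest value for each parameter
            self.subscribers = defaultdict(list)  # parameter name -> callbacks

        def subscribe(self, name, callback):
            self.subscribers[name].append(callback)

        def publish(self, name, value):
            self.values[name] = value
            for callback in self.subscribers[name]:
                callback(value)  # push the update to every interested station

    sheet = SharedSheet()
    # The thermal station reacts as soon as the power station updates its number.
    sheet.subscribe("power_dissipation_W",
                    lambda w: print(f"thermal: resizing radiator for {w} W"))
    sheet.publish("power_dissipation_W", 350.0)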

However, Mark points out some limitations that keep the method from being implemented in every situation. The design team must be flexible enough to work as a large team in a socially closed environment; this can be a major problem if team members are stubborn or do not get along with coworkers. Team members must also be open to their mistakes being broadcast to the entire team, which is what happens when an error or misjudgment is located or questioned. Finally, with up to twenty people in a room, each engineer must be able to work in an often noisy and congested space. Generally, these limitations can be overcome by careful selection of the team, but they are still real considerations that must be taken into account [21].

Overall, the extreme collaboration method used by Team-X has proven time and again to be very successful when the design engineers can be co-located in one room: the resulting designs have been of high quality and produced quickly and efficiently [21]. But what happens when the parties involved are not all in one location? In the following section we will look at current methods for connecting design engineers who may be located all over the world. This topic is extremely important as more companies realize that those with the most expertise may not reside within the company, a key motivator for moving toward a more distributed design framework [4].

Distributed Concurrent Engineering

A relatively new design method that is quickly catching on is distributed concurrent engineering. Simply put, distributed CE builds on the same basic concepts as local concurrent engineering, but takes them a step further by using the now-ubiquitous availability of broadband internet to connect engineers, managers, and users who may be scattered throughout the world to create a single product [12]. As mentioned before, one of the main reasons for this push is that as products and systems have increased in complexity, the necessary expertise is often no longer available in one local setting [4]. For this reason, distributed design has become an important aspect that any design team must take into account in order to create a robust and functional product [26].

As the basic concepts of concurrent engineering are generally well understood, the main obstacle facing distributed concurrent engineering (DCE) is the software layer. There are now myriad software packages, whether CAD, CAM, FEA, or others, that have enabled huge productivity leaps as well as insight into product functionality that was unavailable until recently. However, the limited ability of these programs to communicate with one another has been one of the largest hindrances to truly successful distributed design [12].

Many attempts have been made to design such distributed collaborative systems. Mohamed presents a web-based system that allows developers to share data, communicate, and modify geometry, as well as ensure consistency throughout the project; a key feature is its use of STEP, an ISO standard for the transfer of engineering information in distributed settings [27]. Quan and Jianmin present another web-based methodology that uses four modules for product design fulfillment: co-design, visualization, manufacturing analysis, and service modules. The authors also recognized the importance of knowledge processing techniques for capturing important design information for later use [12]. Yet another internet-based design environment, known as ANetCoDE, was developed by Nahm and Ishikawa. This networked design system was created to study product decomposition and its ability to enhance distributed design methodologies; according to the authors, the method was very successful [28], and on inspection it appears to share many similarities with ModelCenter, a proprietary program meant to facilitate project collaboration [29]. Finally, a predominantly web-based software package known as VOICED has been developed with the goal of capturing design knowledge and using it to aid future conceptual design [6]. As engineering design grows more complex, knowledge capture has become increasingly important in the conceptual design phase, allowing designers to draw on past knowledge and avoid 'reinventing the wheel', as well as costly mistakes later in the design and production process [30].

The underlying similarity among all of the aforementioned design systems is that their basic frameworks are web-based. This matters for distributed concurrent engineering because the shared information must be accessible from anywhere in the world, allowing those working on the project to access it quickly and easily at any time and to immediately update relevant information.

This does not mean, however, that all of the tools have to be web-based. Currently, it would be hard to imagine a fully featured finite element analysis package running entirely in the browser; the sheer amount of computation involved would most likely overwhelm it. This is not a problem as long as the local software can interface with the web-based collaborative design program, which still allows for a truly distributed collaborative design environment where information can be shared easily and quickly. Many of the aforementioned distributed CE frameworks include methods that mitigate these problems [6,12,27-30].
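To make the 'local tool, web-based hub' arrangement concrete, the sketch below shows how a locally run analysis might push its results to a web-based collaboration service over plain HTTP. This is an assumption-laden sketch: the server URL, endpoint path, and result fields are hypothetical, and none of the frameworks cited above necessarily expose such an interface.

    import json
    import urllib.request

    # Minimal sketch of a local analysis tool publishing results to a web-based
    # collaboration service. The URL scheme and payload fields are hypothetical;
    # a real framework would define its own API, data model, and authentication.
    def push_results(server_url, project, results):
        payload = json.dumps({"project": project, "results": results}).encode("utf-8")
        request = urllib.request.Request(
            f"{server_url}/projects/{project}/results",
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return response.status  # e.g. 200 if the shared repository accepted the data

    # Example call (would only succeed against a real server):
    # push_results("https://collab.example.com/api", "bracket-redesign",
    #              {"max_von_mises_MPa": 182.4, "mesh_elements": 120000})

Because the heavy computation stays local and only the results travel over the network, even resource-hungry tools such as FEA packages can participate in an otherwise web-based workflow.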

This progressively more distributed collaboration lends itself to the 'levels' framework presented by Sheremetov, a researcher at the Mexican Petroleum Institute. His work focuses on the petroleum industry but has far-reaching applications to engineering design. The levels, organized as follows, are meant to illustrate the natural progression of moving an engineering process toward greater distributed collaboration.

Level 1 has the goal of ensuring that engineering software is interoperable with other applications at the company or organizational level [30]. Interoperability depends on 'open standards', sets of rules that govern how software packages exchange data. This idea has been heavily pushed in recent years by Jeff Kaplan in the realm of government, and it is just as important in engineering [31]. Mohamed's use of STEP (ISO 10303) in his collaboration framework [27] is a good example of open standards at work; a simplified illustration follows the level descriptions below.

Level 2 ensures that the software packages can integrate data produced at different locations by storing data on the server side and calling on it when needed. This reduces error, allows much easier scalability, and increases data integrity by keeping the data off-site.

Level 3 takes this further and allows the applications to be accessed through a main portal, allowing for a more fluid design process and greater centralization in a virtual sense.

Finally, levels 4, 5, and 6 are more ambitious: respectively, they aim to create virtual applications, user-generated virtual applications, and, finally, automatically generated applications. These are intuitively much more difficult, but the opportunities they would provide could be significant, as they would quickly decentralize the design process and allow for greater flexibility [30].
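As a concrete illustration of the open standards on which Level 1 depends, the sketch below reads a tiny fragment of a STEP (ISO 10303-21) exchange file and tallies the entity types in its DATA section. It is deliberately simplified, for illustration only: real exchanges rely on full STEP toolkits that understand the schema, the references between entities, and multi-line records.

    import re
    from collections import Counter

    # Match entity instance lines of the form "#12=SOME_ENTITY(...);" in the
    # DATA section of a STEP Part 21 file. Header lines do not start with '#',
    # so they are ignored by this simplified reader.
    ENTITY_PATTERN = re.compile(r"^#\d+\s*=\s*([A-Z0-9_]+)\s*\(")

    def count_entities(step_text):
        counts = Counter()
        for line in step_text.splitlines():
            match = ENTITY_PATTERN.match(line.strip())
            if match:
                counts[match.group(1)] += 1
        return counts

    sample = """ISO-10303-21;
    HEADER;
    FILE_DESCRIPTION(('illustrative fragment'),'2;1');
    ENDSEC;
    DATA;
    #1=CARTESIAN_POINT('',(0.,0.,0.));
    #2=CARTESIAN_POINT('',(10.,0.,0.));
    #3=DIRECTION('',(0.,0.,1.));
    ENDSEC;
    END-ISO-10303-21;"""

    print(count_entities(sample))  # Counter({'CARTESIAN_POINT': 2, 'DIRECTION': 1})

Because the format is an open, text-based standard, data exported from one CAD system can be inspected and consumed by tools from entirely different vendors, which is exactly the interoperability that Level 1 calls for.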

The level system laid out above gives a good idea of the progression software must make to support a robust, distributed collaborative engineering environment. However, these levels require working with systems that are often proprietary and closed, and hence difficult to change and modify. This is a significant problem that appears often in the methods and software infrastructures mentioned in the previous sections. For the most part, these methods exist within institutions such as corporations or universities that have varying degrees of formality in their organizational structures and protocols. However, this approach to design, organization, and production is slowly changing in ways that many consider revolutionary. In the next section, we will look to the software engineering field, where more 'open' collaboration has become extremely popular, producing projects such as the Linux operating system, the Apache web server, and Wikipedia, to name a few.

Open Source Software Development

  • again from Michael Koch; will be cleaned up soon. Also, is this the right place for it?

In the last few decades, the open source design methodology, or simply open source, has grown from a somewhat undefined idea into what many consider a paradigm shift that is quickly replacing the archaic closed systems that have existed for decades. As Eric Raymond puts it, "the open source model of operation and decision making allows concurrent input of different agendas, approaches and priorities, and differs from the more closed, centralized models of development." [32] This mode of design stands in contrast to the more established institutional methods mentioned throughout this chapter.

So what is this 'open source design methodology'? Essentially, open source is a relatively unstructured way of developing software packages or products. Simply put, open source software development is a process in which the source code is made available through the internet, and its modification, use, and distribution are promoted as essential to the success of the product [35].

Similar to concurrent engineering, design and implementation are performed in a highly parallel fashion by individuals who may or may not be geographically dispersed. However, there are stark differences from the previously mentioned methods. Specifically, those involved are not directly compensated monetarily [33]. Contributors may join and leave whenever they wish, with no repercussions for doing so [1]. The source code, or the inner workings of the software, is distributed openly with the final software product [33], and others may start separate projects based on the code, a practice known as 'forking'. Forking is somewhat rare, but it has happened several times in more popular projects such as Linux and wiki software [1]. This openness has allowed a more diverse dialogue to evolve around these projects and is one of the keys to open source's success.

Another way to visualize the design process is through the analogy of the cathedral and the bazaar, a model put forth by Eric Raymond that has been used extensively to describe how open source design works. The cathedral represents the firm: a highly organized, closed, and hierarchical institution tasked with producing a product such as software or airplanes. In modern terms, the cathedral can be likened to corporations such as Microsoft or Boeing. The bazaar, on the other hand, represents an organizational structure that is relatively flat and open, where contributors have equal say and representation [32]. The image of a bazaar as a bustling place where people present their ideas and agendas gives a good sense of how open source works [33]. These descriptions convey a general idea of why each system has had its successes and failures.

Part of the genius of open source is that large, complex projects such as the Linux operating system can be divided into smaller tasks that a large group of volunteers can work on. By leveraging this large group, the amount of brainpower brought to bear is virtually unmatched in the corporate setting. Of course, there must be a well-defined review process to ensure that quality work is being submitted and to reduce the risk of the code forking [36].

Similarly, the large group of volunteers allows for the application of Linus's Law, which states: "Given enough eyeballs, all bugs are shallow." [32] In other words, by leveraging this network of contributors, problems in the source code or design can be found and fixed. In software development, debugging can be a tedious job, especially when dealing with millions of lines of code; by spreading it across a distributed network of contributors, the burden on any one person is significantly reduced because the work proceeds in parallel [33]. The same idea applies to Wikipedia, where information is reviewed and debated by users all over the world, creating a robust information database that may be unrivaled today.

Similarly, Charles Leadbeater points out that previous organizational models rely heavily on the idea that experts must be assembled in institutions, such as Microsoft, Stanford University, or the Jet Propulsion Laboratory, to create products or innovation. Open source flips this model on its head, relying not on an institution but on a community of developers and contributors to create innovation, motivated by factors such as utility, pride, and enjoyment. To many this may seem perplexing, but it makes perfect sense when looking back at how people have historically organized around community needs and issues [2].

In summary, open source software development is a methodology very different from the more established design methods mentioned previously. At its core, it is an egalitarian design process that gives each individual freedom over what they work on and how they approach design problems. The philosophy behind this process is important to its success and presents a challenge to the more established methods that have dominated organizations and institutions for decades. Given open source's success, the next step seems to be broadening these methods into other fields, specifically engineering design. Is it possible to accomplish in the physical realm the same kind of complex system design that open source has achieved in software? The following section will look at current work in what is now being called 'open design' and at the successes and failures it has met in its relatively short existence.

References

  1. Ma, Y., Chen, G., Thimm, G., "Paradigm Shift: Unified and Associative Feature-based Concurrent Engineering and Collaborative Engineering", Journal of Intelligent Manufacturing, DOI 10.1007/s10845-008-0128-y
  2. Wikipedia article, http://en.wikipedia.org/wiki/Concurrent_engineering
  3. Kusiak, Andrew, "Concurrent Engineering: Automation, Tools and Techniques"
  4. Quan, W., Jianmin, H., "A Study on Collaborative Mechanism for Product Design in Distributed Concurrent Engineering" IEEE
  5. Kusiak, Andrew, "Concurrent Engineering: Automation, Tools and Techniques"
  6. Quan, W., Jianmin, H., "A Study on Collaborative Mechanism for Product Design in Distributed Concurrent Engineering" IEEE
  7. NASA Webpage, “The standard waterfall model for systems development”, http://web.archive.org/web/20050310133243/http://asd-www.larc.nasa.gov/barkstrom/public/The_Standard_Waterfall_Model_For_Systems_Development.htm, Nov 14, 2008.
  8. Kock, N. and Nosek, J., “Expanding the Boundaries of E-Collaboration”, IEEE Transactions on Professional Communication, Vol 48 No 1, March 2005.
  9. Ma, Y., Chen, G., Thimm, G., "Paradigm Shift: Unified and Associative Feature-based Concurrent Engineering and Collaborative Engineering", Journal of Intelligent Manufacturing, DOI 10.1007/s10845-008-0128-y
  10. Royce, Winston, "Managing the Development of Large Software Systems," Proceedings of IEEE WESCON 26 (August 1970): 1-9.
  11. Kusiak, Andrew, "Concurrent Engineering: Automation, Tools and Techniques"
  12. Kusiak, Andrew, "Concurrent Engineering: Automation, Tools and Techniques"
  13. Rosenblatt, A., and Watson, G. (1991). Concurrent Engineering, IEEE Spectrum, July, pp 22-37.