Goals, Scripts, and Process within Design

What are design goals?

goal n [ME gol boundary, limit] (1531) 2: the end toward which effort is directed: aim

intention n (14c) 1: a determination to act in a certain way: resolve 2: import, significance 3 a: what one intends to do or bring about ... 5: concept; esp: a concept considered as the product of attention directed to an object of knowledge

syn intention, intent, purpose, design, aim, end, object, objective, goal mean what one intends to accomplish or attain. intention implies little more than what one has in mind to do or bring about <announced his intention to marry>. intent suggests clearer formulation or greater deliberateness <the clear intent of the statute>. purpose suggests a more settled determination <being successful was her purpose in life>. design implies a more carefully calculated plan <the order of events came by accident, not design>. aim adds to these implications of effort directed toward attaining or accomplishing <her aim was to raise film to an art form>. end stresses the intended effect of action often in distinction or contrast to the action or means as such <willing to use any means to achieve his end>. object may equal end but more often applies to a more individually determined wish or need <his constant object was the achievement of pleasure>. objective implies something tangible and immediately attainable <their objective is to seize the oil fields>. goal suggests something attained only by prolonged effort and hardship <worked years to reach her goals>.

How can design goals be described? Within design we are trying to resolve conflicts between numerous design goals. The goals are constantly updated based upon additional input or insight. How can we represent a flexible goal? We can assume that at any given phase in the process our different design goals have a representable state. What are some potential design goals:

  1. Conceptual: intentions can incorporate issues such as theoretical relationships or underlying base ideas, for example a parti diagram (which is a conceptual relationship built from multiple design goals).
  2. Historical: Based upon precedent information, meaningful examples, or influences on technique.
  3. Experiential
  4. Structural
  5. Spatial
  6. Aesthetic: considerations might include issues such as composition, individual style, or formal manipulation techniques (which are scripts), but the method for comparison is judged against a design goal frame.
  7. Contextual: characteristics approximate the interaction of the building within a larger context.
  8. Pragmatic
  9. Social
  10. Economic
  11. Programmatic

When one tries to approximate definitions for something as vague as a design goal, it is important to see relational connections between goals. These relationships are implied by our understanding of how we design. You might ask, "How can you have a contextual goal if it is not directly derived from historical goals?" and this is exactly the point: we manufacture our goals and scripts in such a way as to allow us to design. If you try to think of a contextual goal, your mind might immediately search for a relevant historical example or a previously solved (or currently being solved) solution; when an example is found, the script for the problem is also retrieved, and you can explain the relevance of the situation based upon previous scripting techniques, leading to a solution which in the end has the potential to become another design goal example. These modes of scripting goals can be very different from designer to designer, but I believe our unique techniques can be approximated in script and goal formats. This understanding of goal use should be viewed as a device within the process of exploring design. The goals are more instigators of mental processes. For example, if the current goal state allows the spatial goal to rise and delegate tasks, it may signal constituent agencies of perceptive experience, call up spatial precedent examples, and signal the designer's visual system to view the drawings quite differently than if the pragmatic goal were dominating.

Goal Filtering? Spatial Order Weighting? What are scripts?

When we sit down at a drawing board or a computer or to sketch, we follow procedural scripts. These scripts update themselves based upon the tasks at hand, but they probably have similar templates. These scripts are very different from the traditional notion of scripts as described within the fields of anthropology or artificial intelligence.

Abstract

I have a box, and it can design architecture. The only problem is that the system is so internalized that it won't share or communicate the process of how it is done.

The object of this experimental paper is to collate previously completed projects which explore computational design as the vehicle for wrestling with and understanding these issues. I will place these experiments within a critical context, expand on their shortcomings, and speculate about the future of self-generating systems within design. The paper is segmented (as of now) into four sections:

I will try to offer evidence to suggest that procedural algorithms can be written (and in some conditions have to be written) to fully explore the methods and techniques used in design.


Questions???

Questions in relationship to CAAD and machine creativity


We involve ourselves with the search and exploration of architectural ideologies and methods, but inherently, the end of our discourse leads to humanistic interpretation -- what are the principal concerns of our profession? The lack of leadership and hierarchical constraints is promising, but currently only contributes to the floundering and 'rehashing' of conventional ideas.

The largest problem in working within computational structures is the heavy-handedness of 'objective' problem-solving. Exploring creative structures could be considered oxymoronic. Is it possible to impose a structure on creativity? I believe the problem is larger and at the same time comprised of constituent parts which are 'solvable'. Where does this leave me? Quite a foreboding life's exploration into what might possibly be the wrong avenue of discovery. If we begin to analyze computational structures, we first wrestle with the question of mimicking and reproduction. There are two camps within this area:

01 Develop systems which contiguously emulate human systems.
02 Construct systems which potentially simulate human systems but are based upon nonrelational correspondences.


I would argue that a creative system does not have to be modeled after the human brain, but can emulate significantly compatible representations. More specifically, these systems have the potential to create alternative constructions which move past conditional limitations imposed by conventional constraints. One could write a dissertation concerning conditional limits and conventional constraints, but briefly, they consist of the large 'blocks' of standardized knowledge and collective psychology which emulate and simultaneously construct a priori knowledge. Perhaps these systems have developed because of their evolutionary advantages concerning communication, disseminating on levels more primitive than 'language' itself. Anyway--

Back to 'heavy-handed' objectivism. Most objective understanding of aesthetics is driven by systematic, regularizing, methodical processes. As architects we 'competently' design wielding various performance-based criteria in our evaluation of our process. How is it possible for us to visually 'scan' drawings and derive 'instantaneous' conclusions? How much intuition is involved? Intuition mapped onto computational understanding might be synonymous with a 'quick search through highly organized memory'. Intuitions can be wrong, but this is also beneficial (more on this later). Laborious design, design which is well thought out and compared with numerous criteria-based systems, closely relates to 'exhaustive search mechanisms through all mapped memory spaces'. These two (computational) processes are not creative but evaluative. Creative manifestations arise from synthetic constructions of 'random' priority systems. These developmental systems, although creative, in the end have to be judged. Random (not in a conventional understanding) because hierarchical memory structures allow us convenient access to high-priority information.

We could crudely replicate this by comparing our memory to thousands of stacks of books. Each stack contains any number of books. Each stack is prioritized through a function of time and use. Which is to say, the information in a stack which is most useful is located at the top, but it can also be information learned years ago. We operate by 'selecting' from various stacks. Higher-order thinking involves delving deeply into the stacks, constructing relationships and processing useful information. The task, I believe, is more relational than differential. This is arguable, but through analysis we have to recognize the properties of objects before the differences are revealed. For example:

I have an apple and an orange
One might argue they are different, but first you have to recognize the properties of each. The orange is orange, the apple is red. The orange is not red. The apple is not orange. These are differences, but an understanding of each first has to be made and considered. One might argue that it is easier to represent the differences of localized associative comparisons--I can find more differences between the objects than similarities--but consider the idea in a generalized scope. Absolute comparison involves searches through memory space at a larger scale. If I ask the question--what is an orange different from?--you would develop lists of meaningless examples: a car, a book, school, computers, etc. If I ask what the orange is similar to, one could quickly find relevant examples: the sun, a ball, Florida, etc. Through this search you are comparing the properties of the object orange with similar properties, and cataloging the evidence based upon the sharing of one or more properties. Are all properties stored? I don't know... we could get very technical about perception and memory storage here, but maybe later.
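A minimal Python sketch of this similarity-first search; the objects and their property sets are invented for illustration. Ranking by shared properties surfaces the sun and the ball long before the car, mirroring the orange comparison above.

    # A toy model of relational comparison: rank objects in 'memory'
    # by how many properties they share with the target object.
    # All property sets here are invented for illustration.

    properties = {
        "orange": {"round", "orange-colored", "graspable", "organic"},
        "apple":  {"round", "red", "graspable", "organic"},
        "sun":    {"round", "orange-colored"},
        "ball":   {"round", "graspable"},
        "car":    {"metallic", "wheeled"},
    }

    def similar_to(target, catalog):
        """Rank every other object by the number of shared properties --
        a crude stand-in for searching memory for overlapping traits."""
        base = catalog[target]
        scored = [(name, len(base & props))
                  for name, props in catalog.items() if name != target]
        return sorted(scored, key=lambda pair: -pair[1])

    print(similar_to("orange", properties))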

What does this have to do with ARCHITECTURE?

We are back at the question of machine-based creativity within architecture. I have been working on self-generating architectural objects for a while now. Part of the investigation involved random searches of possible formal generations. Another involved searching through all possible representations of constrained systems. The largest problem comes back to the idea of aesthetics, usefulness and purpose. My thesis will involve similar ideas--but probably be a system which through evolution discovers architecture... but of course the system has to have an evaluative concern--whether imposed (externally) or system-based. More to come...
What are the problems with presenting design and creativity as black box abstractions?

01 We limit our expectations to a classical (Romantic) way of experiencing the world. The dual nature of classical representation offers a split distinction in our existence: logical or idiosyncratic.

02 We disregard, or rather reduce, our appropriations from the related disciplines of cognitive science, psychology and artificial intelligence. With our increased understanding of these discourses we remove our previously unchallenged marriage to untested philosophical representations of aesthetics.

03 The profession itself suffers in relationship to technological and scientific advances which will dramatically challenge the status quo of architectural production.
Why is it commonly supposed that creativity and design are a mystery and differ fundamentally from other human endeavours?

01 Part of the problem is with symbolic abstraction within the process of design or other creative activities. How is designing different from other expert decision-making processes (in science or management)? Other disciplines replicate and explain themselves through syntactic language structure. Within art, alternative symbols are used which are less likely to be explained solely by language.

02 Although the underlying theme between divergent activities may be the same, the result being understanding, the similarity does not extend into the relationship of process. Perhaps the objective of all learning is understanding. This is in accord with Kolb's idea of understanding as the appropriation of experience.

03 What one has to consider in evaluating activities is not the common misnomer between reasoning which is intellectual and logical and reasoning that is intuitive, but the media, subject matter and class of experiences, which will be different between areas of expertise and disciplines.
How does individual style develop, and how is it well suited to procedural investigations?

01 This is another huge question.
02 Part of the answer lies in the designer's expert ability to turn complex problems into tractable and tangible ones. This is usually accomplished through very specific (yet flexible) procedures which have a direct relationship to individual style (as described by Chan). Ackerman describes the characteristics of a work of art which contribute to a definition of style as: conventions of form and of symbolism, material, and techniques.
Why is design particularly difficult (but well suited) to establish computationally, although this may appear contradictory?

01 Designing is an activity which can take place at any stage of a process. Schon describes decision making and discovery in similar terms, as the projection of metaphors with which we are familiar onto new, unfamiliar situations. The process describes itself as trying to map existing partial design conditions to expected design situations. The problem arises from the previous question of the establishment of expertise and style. A large amount of knowledge (data) has to be well stored and easily accessible in order to be relevant and useful in reference to design; this information must also exhibit "fluid" principles so that it can be updated and changed.

A few comments relating to publications

I would like to trace a temporary boundary in which to situate my research. By no means is it an attempt to exhibit mastery in the following discourses; it is merely a plotted function of interest and exploration.

Thinking

Thinking about data


Why do I think there is such a strong connection between process, thinking and design? The explanation lies within the boundaries of procedurally explaining the relationship between data, the organization of that data, and the algorithms for manipulation. The necessary atomic processes for these manipulations dance around the concepts of growth, decay, and mutation. The most pervasive metaphor to describe these processes comes from the biological understanding of genetics. Recent developments in genetic algorithms and genetic programming allow for the computational generation of solutions rather than the empirical development of a solution. The theory of computational evolution arises from generating enough tests which can be compared, and developing offspring from the solution 'gene' pool. There has to be an approximation of this code when we use design scripts and goals. We find within our own works a common recognizable thread. Whether in the compositional parti layout, the process, the method, or the techniques employed, all add to this as a codified understanding.
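As a point of reference, the evolutionary loop sketched above can be stated in a few lines of Python. This is a generic sketch, not the architectural system described later in this paper; the bit-string encoding and the fitness measure are placeholders of my own choosing.

    import random

    # A minimal generate-evaluate-breed loop: score a population, keep
    # the fitter half as the 'gene pool', and produce offspring through
    # one-point crossover plus occasional mutation.

    def random_genome(length=12):
        return [random.randint(0, 1) for _ in range(length)]

    def fitness(genome):
        return sum(genome)  # placeholder measure: count of 1-bits

    def evolve(pop_size=20, generations=30, mutation_rate=0.05):
        population = [random_genome() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]      # survival of the fittest
            offspring = []
            while len(offspring) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(a))     # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g ^ 1 if random.random() < mutation_rate else g
                         for g in child]              # mutation
                offspring.append(child)
            population = offspring
        return max(population, key=fitness)

    print(fitness(evolve()))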

The thought processes within design can in turn be described as amorphous and ever-changing. Could these processes have a 'base code' and grow, decay, and mutate depending on use, interaction and strength within the survival of the fittest of our mind? The evolutionary position could be quite strong, but I am not positing this argument as truth. Borrowing the underlying ideology of another discourse allows an alternative method for explaining, and perhaps gaining further insight into, our own profession of 'design thinking'. The experiment takes another step further by implementing the process within itself, on itself. This is what I term generative architecture.

Some discussion about genetic algorithms

How can computers learn to solve problems without being explicitly programmed? In other words, how can computers be made to do what needs to be done, without being told explicitly how to do it?

What are some ways at approaching the problem in a different, perhaps idiosyncratic, way?

  1. Correctness
    Science, mathematics and engineering place their emphasis, almost always, on finding the correct answer. Imprecise answers and small errors are unacceptable to these disciplines, but are potentially useful for the field of architecture and design. Erroneous attempts and malformed feedback can lead to creative, unexpected, novel solutions.
  2. Consistency
    Inconsistency is not acceptable to the logical mind, but I believe it to be an integral part of examining the solution space of a given design problem. We need to explore inconsistent and contradictory approaches more vigorously. This greater diversity can lead to alternative solutions more quickly.
  3. Justifiability
    Various attempts have been made to logically and rationally describe the design process. In a majority of these examples we find a posteriori explanations which are completely valid for any design description. Recognizing and focusing the search space initially leads to a certain pigeonholing of the process.
  4. Certainty
    I subscribe to the chance camp (if you haven't already noticed). Nondeterministic models open up the spectrum to a wider experience of potential solutions. But one also has to recognize that within chance (or random) models nothing is guaranteed.
  5. Orderliness
    Prescriptive procedural methods tend to limit the whole explanation of a design problem. We have to enable methods, scripts, and processes which are not determined by their order of operation. Certainly our minds do not follow 'exact' prescribed patterns of operation, especially within a design problem. We attack the problem at hand based upon the given knowledge and experience at that time.
  6. Decisiveness
    We all know that a design solution does not contain a well-defined termination point. It is an ever-changing and growing process, mostly guided by timelines and economy. But there are always stages of improvement, time allowing.

One example currently being explored is the development of primitive shapes which are derived or invented through the process of experimentation; see Koza's work (Koza, 1992:94).

What are some ways of manipulating evolved states?

Firstly, what is the structure of an evolved state?
The image below represents a similar structure for creating shapes as explained by the autonomous generation program. I am trying to create a genetic algorithm for the generation of architectural objects. The initial genetic code of the following objects consists of six randomly selected lines of four different lengths. These lengths are expressed by the colors red, green, blue, and yellow. The code also consists of the connective orientation, in this example the four coordinate directions. The object is to derive a genetic algorithm for creating closed shapes. This of course is not a difficult problem, and there are existing algorithms to explain the process for creating such shapes (see previous work). The point of this exercise is to create a system which derives the algorithm through experimentation. The genetic code of the shapes learns through successive generations by means of recombining the code of 'fit' examples. Through self-experimentation and self-generation the system learns how to represent shapes--for example, the largest possible area given only X number of lines, or alternatively the minimal area, shapes with the most corners, etc.
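One plausible encoding of this genetic code, sketched in Python. The specific numeric lengths assigned to the four colors are my assumption, as is the use of compass names for the four coordinate directions.

    import random

    # Each genome is six genes; a gene pairs one of four lengths (named
    # by the colors in the text) with one of four coordinate directions.

    LENGTHS = {"red": 1.0, "green": 2.0, "blue": 3.0, "yellow": 4.0}
    DIRECTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

    def random_genome(genes=6):
        return [(random.choice(list(LENGTHS)), random.choice(list(DIRECTIONS)))
                for _ in range(genes)]

    def trace(genome):
        """Walk the genome from the origin and return the vertices; a
        genome describes a closed shape when the walk returns to (0, 0)."""
        x, y = 0.0, 0.0
        points = [(x, y)]
        for color, heading in genome:
            dx, dy = DIRECTIONS[heading]
            x, y = x + dx * LENGTHS[color], y + dy * LENGTHS[color]
            points.append((x, y))
        return points

    g = random_genome()
    print(g)
    print("closed:", trace(g)[-1] == (0.0, 0.0))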

How does recombination happen?

Observed fitness ratings after the initial generation.
Possible mating pool resulting from applying the operation of fitness-proportionate reproduction to the initial random population.

	Generation 0			Mating Pool (reproduction)
i	String		Fitness	f(xi)/sum f(xi)	M.P.		Fitness
1	4.2 4.2 8.0	0+26+-10 = 16	.12	2.0 4.2 8.0	54
	2.0 6.1 2.3				4.3 6.2 6.1

2	2.0 4.2 8.0	24+30+0 = 54	.40	2.0 4.2 8.0	54
	4.3 6.2 6.1				4.2 6.2 6.1

3	6.2 2.1 6.0	12+30+0 = 42	.31	6.2 2.1 6.0	42
	6.3 4.0 6.1				6.3 4.0 6.1

4	2.3 8.0 6.1	0+28+-6 = 22	.16	2.3 8.0 6.1	22
	2.3 4.2 6.1				2.3 4.2 6.1

Total			134			172
Worst			16			22
Average			33.5			43
Best			54			54
The chart above consists of two parts: the initial generation 0, and the mating pool derived from it. For this example we are using only four objects which are randomly created. A more realistic example would consist of between 200 and 1000 randomly generated objects. Of course the search space is determined by the permutation of the variables; this example consists of 4^12 = 16,777,216 possibilities. Similarly, we can arrive at an approximation of a best-fit solution of a generated set through roughly 200x30 to 1000x50 (6,000 to 50,000) evaluations over 30-50 generations. The fitness for the above example is determined by the length of all lines within a genetic code, plus the area (if any) of the bounding polygon, minus the minimal implied length needed to close the object (if the object is not closed). The system develops in reference to fitness-proportionate measurements; therefore the fitness function is of exceptional importance. The fitness function is exploited by the population in successive generations. An interesting aside: if there are any bugs or false intuitive assumptions within the fitness routine, they will certainly be found by the population of developing objects.

What is appealing about the fitness routine? One primary concept of fitness scoring is the idea of partial credit. This concept is rarely used within computer science--usually an algorithm communicates only whether or not a solution was found. Throughout the generations, partial solutions to various problems are judged. The evolution of a solution is primarily determined by the extent and granularity of the fitness routine (if a fitness routine is based on only a few values, we probably do not have enough information about the problem).
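A sketch of that partial-credit scoring, written against the vertex list a traced genome produces (as in the encoding sketch above). The equal weighting of the three terms is my assumption; the point is that an open shape still earns credit in proportion to how nearly it closes.

    import math

    # Partial-credit fitness: total line length, plus enclosed area when
    # the walk closes, minus the gap still needed to close an open shape.

    def shoelace_area(points):
        """Area of the polygon implied by an ordered vertex list."""
        area = 0.0
        for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    def fitness(points):
        lengths = [math.dist(a, b) for a, b in zip(points, points[1:])]
        gap = math.dist(points[-1], points[0])    # 0.0 when the shape closes
        area = shoelace_area(points) if gap == 0.0 else 0.0
        return sum(lengths) + area - gap          # credit for partial closure

    # A closed unit square: length 4, area 1, gap 0 -> fitness 5.0
    print(fitness([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]))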

The mating pool is developed through the Darwinian principle of survival of the fittest. In this example we replace the least fit member with the best fit member of the generation. This list becomes the mating pool for offspring development.

Parent 1 (test 4)		Parent 2 (test 2)
2.3 8.0 6.1 2.3 4.2 6.1		2.0 4.2 8.0 4.3 6.2 6.1

We then randomly select a crossover point (in this case we will select one gene) using a uniform probability distribution between 1 and L-1, where L is the length of the code. Let us assume node #2 was selected. This would give us the crossover fragments:

Parent 1 (test 4)		Parent 2 (test 2)
2.3 -.- 6.1 2.3 4.2 6.1 	2.0 -.- 8.0 4.3 6.2 6.1
Remainders from the parents:
Remainder 1			Remainder 2
-.- 8.0 -.- -.- -.- -.-		-.- 4.2 -.- -.-  -.- -.-
We can then create the children of the parents by distributing the remainder fragments between the crossover fragments. In this case we also use a switch-gene-position operation.
Offspring produced by crossover
Offspring 1			Offspring 2
2.3 4.2 6.1 2.3 4.2 6.1		2.0 8.0 8.0 4.2 6.2 6.1

	Mating Pool (reproduction)	After Crossover (Generation 1)
i	M.P.		Fitness	C/O Pt.	Code		Fitness
1	2.0 4.2 8.0	54	2	2.0 8.0 8.0	22
	4.3 6.2 6.1			4.3 6.2 6.1

2	2.0 4.2 8.0	54	-	2.0 4.2 8.0	54
	4.2 6.2 6.1			4.2 6.2 6.1

3	6.2 2.1 6.0	42	-	6.2 2.1 6.0	42
	6.3 4.0 6.1			6.3 4.0 6.1

4	2.3 8.0 6.1	22	2	2.3 4.2 6.1	58
	2.3 4.2 6.1			2.3 4.2 6.1

Total			172			178
Worst			22			22
Average			43			44.5
Best			54			58

Through this simple example we have produced another closed object, one which became the most fit within the current population.
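For reference, the recombination step walked through above can be sketched as follows. This sketch uses plain one-point crossover plus the switch-gene-position operation; the remainder-redistribution bookkeeping shown in the fragments above is folded into the fragment swap.

    import random

    # One-point crossover over six-gene codes, with an optional
    # switch-gene-position operation applied to an offspring.

    def crossover(parent1, parent2):
        point = random.randint(1, len(parent1) - 1)   # uniform over 1..L-1
        return (parent1[:point] + parent2[point:],
                parent2[:point] + parent1[point:])

    def switch_genes(genome):
        i, j = random.sample(range(len(genome)), 2)
        genome = list(genome)
        genome[i], genome[j] = genome[j], genome[i]
        return genome

    p1 = [2.3, 8.0, 6.1, 2.3, 4.2, 6.1]   # parent 1 (test 4)
    p2 = [2.0, 4.2, 8.0, 4.3, 6.2, 6.1]   # parent 2 (test 2)
    c1, c2 = crossover(p1, p2)
    print(switch_genes(c1), c2)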

The image below represents a similar structure, but the goal becomes more interesting: could a system which draws lines at random angles learn to draw orthogonal representations of the shapes above?




Controlled Coincidences


How can our accidents lead to insights?

What role do mistakes play in developing creative ideas?

This paper attempts to give insight into commonly experienced 'phenomena' within the architectural design process. Design requires a constant manipulation of intentional agencies; these agencies can be the resolution of separate goals between concepts such as contextual vs. historical vs. aesthetic vs. pragmatic. A strong architectural solution is judged on the competence of the designer in dealing with these separate agencies. The idea of controlled coincidences relates to the 'unintentional' fulfillment of multiple goals in the development of the design project. The important aspect of satisfying the goal is that it does not involve a conscious constructive act on the part of the designer but evolves from within the process and the transforming relationships of the agencies.

Practitioners of architecture develop technical skills and the process of learning ill-defined problem solving. Within a design problem there are relatively few limiting parameters and a multitude of potentially viable solutions. The architect needs to compile a mental tool kit of processes, methods, techniques and styles for managing complex problems. The building of this tool kit starts within design school and extends itself within the profession. Previously solved solutions are used only as reference or precedent. The procedural recordings become most important in response to such questions as 'how to proceed at any given stage?' What sorts of frame representations, scripts, and level bands does this require? The problem has to be framed in accordance with specific intentional goals of quality and perhaps vision, or in relationship to a theoretical and conceptual stance. If lower level bands relate to similar levels of detail, and the upper bands more to the functions and goals of the situation, then a certain level of constant cross-band checking must also be in operation.

What sorts of hidden assumptions are implemented within an ill-defined problem, how does this lead to 'framing' tasks and goals, and more importantly, how does it offer insights into the activation of agencies?

Controlled coincidences arise from specific situations of agencies with inherently different goals which, through established polyneme connections and specific scripts for alternating level bands, can lead to consistent, new, and unintentional results.

How do these insightful situations intentionally arise?

Communication between agencies with competing goals searches for resolution within a sphere of implications (Hofstadter 1985), referring to developing the ability of implicitly seeing things which never were. This process, although seemingly sublime, can be explained through overlapping agencies communicating issues at various band fringes. Through the swapping of descriptive goal states and the reordering of pronomes, 'dangling' ideals are also carried into the new evaluative process.

Suppose each specific design goal is an individual agency (aesthetic, structural, historical, etc.). These agencies have the ability to be connected by k-lines to other agencies. The connections can be better-informed polynemes, temporary pronomes, or their relative position within a given state space. The attached diagram shows three separate but potential situations:

01 Two individual but non-connected agencies
02 Agencies connected with a k-line (whether polyneme or pronome)
03 Overlapping agencies with inter-relating parts, but without structured connections.



The importance of 03 is to show that without 'noticeable' connections, recognizers and suppressors can communicate between shared states. Therefore another working agency can influence, steer, or adjust the tasks at hand. This shared space can lead to the unintentional satisfaction of goals between agencies, depending on the amount of space shared and the interaction within the space.
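A speculative sketch of situation 03: two agencies with no structured connection that still influence each other through a shared region of state space. The agency names, state keys, and the trigger rule are all invented for illustration.

    # Two agencies share a region of state space; a recognizer in one
    # reacts to what the other wrote there, without any explicit k-line.

    shared_state = {"axis": "north-south", "rhythm": "irregular"}

    class Agency:
        def __init__(self, name, private_state):
            self.name = name
            self.state = private_state

        def adjust(self, shared):
            # The recognizer fires on a value another agency produced.
            if shared.get("rhythm") == "irregular":
                self.state["priority"] = "reorder"

    aesthetic = Agency("aesthetic", {"priority": "composition"})
    structural = Agency("structural", {"priority": "spans"})
    structural.adjust(shared_state)   # steered by the overlap, not a message
    print(structural.state)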

Polynemes communicate with several agencies simultaneously; the agencies of design (social, structural, economic, formal, etc.) need to resolve the ambiguities between each other, or develop 'new' ways of scripting the tasks at hand. How does consistency with previous experience play between and within these agencies? How do recognizers and suppressors from different agencies deal with the problem? When do they speak up or suppress within the process? While trying to solve a wicked problem, the designer does not usually systematically work and test considerations between agencies on a given problem. Because the parameters of the problem are always changing, there is no 'standard' or typical method for approaching design, so we must look at individual designers' tool kits and methodologies. But designers do place emphasis on a given 'agency state'; the agency state rises to a hierarchically more important role when analyzing with state and script in mind.

Perhaps more interesting is the result of script writing and goal comparison by individual designers. The strengthening of the polynemes over time in regard to design processes allows for intuitive problem management. The immediate project or task at hand is usually represented by pronomes following the 'grain' or predominant direction of the trained polynemes (in reference to any number of trans-frames). In my next paper I will talk about the scripts possibly used within architectural design, and how the trans-frames representing the current design state with design goals can be implemented into the system.


Agency States



How can we model a time-based system in relationship to changing frames and shifting our scripts and goals?

In our own thinking we have the ability to keep a number of things in mind, but usually there is a predominantly focused attention on a singular object of thought. In my last paper I discussed means of controlling coincidences with respect to creative thoughts. I briefly mentioned overlapping objects which could represent the activation of recognizers and suppressors within other agencies. The interplay of immediate goals with recognizers activating past problem-solving techniques leads us to formulate method scripts for problem solving.

As a problem is being solved it is approached and described by various agencies in very different manners. This is a relevant topic within design because the designer tries to manage solutions with the least amount of resistance from all agencies. The design agencies therefore need to perform cross-checking with separate standards, expectations and criteria.

But how do these different scripts overlap or influence each other?

One way to model the k-line connections is to view the trans-frame as a spatially transformable object. What if this conceptual frame were transformable and repositionable? The diagrams below conceptually represent the object frames in an alternative fashion. The objective is not to present this model as a definitive process of how the mind works, but to include a potentially computable and relational temporal system. I mentioned before a variety of goal agencies which can come to influence the architectural design process. The types of goals (each having separate requirements to try to satisfy) include, but are not limited to:

Experiential EX
Structural ST
Economic E
Social SO
Aesthetic A
Historical H
Spatial SP
Contextual CT
Pragmatic P

Below is a linear arrangement of these goal frames. This list can be viewed in order of operation and importance, showing connections between other goals relationally active at that time. Although it also represents various k-line connections, it lacks a temporal explanation of how a shift or recomposition of the agencies might occur.



The second diagram is in relation to a constantly changing state. The state is in reference to how a problem is being investigated and solved at any given time. The state shows the higher-order precedence agencies in a top-to-bottom relationship. State 01 represents a script for a more practical approach to solving the tasks at hand: Agency-Practical and Agency-Structural inhabit the larger portion of the state space. The transformation of the state involves the updating of size and place in relationship to time. We view this as a way in which the mind orders itself in changing states, involving the overlap of cross-connections between seemingly dissimilar ideas. State 02 represents a transformation from State 01 in which the general thrust of the ideas relates more to Agency-Spatial, Agency-Aesthetic and Agency-Experiential. The scripts used between phase states are quite different, and the scripts used in the transformative stage reflect the general spatial characteristics of the agency states. The diagram between the two states represents the transforming nature of the state. Because any state is always updating itself or transforming, this can be used as a means of predicting or guiding future states within the system.
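One way this transformation could be made computable, sketched in Python: each agency holds a share of the state space, and an update rule steers the shares from the practical State 01 toward the spatial State 02. The agency names, initial shares, and update rate are invented for illustration.

    # Agency shares of the state space, steered toward a new emphasis
    # and renormalized so they still describe one state space.

    state = {"practical": 0.35, "structural": 0.30, "spatial": 0.15,
             "aesthetic": 0.12, "experiential": 0.08}

    def transform(state, emphasis, rate=0.2):
        moved = {name: share + rate * (emphasis.get(name, 0.0) - share)
                 for name, share in state.items()}
        total = sum(moved.values())
        return {name: share / total for name, share in moved.items()}

    # Steer State 01 (practical/structural) toward State 02
    # (spatial/aesthetic/experiential) over several incremental updates.
    target = {"spatial": 0.4, "aesthetic": 0.3, "experiential": 0.3}
    for _ in range(5):
        state = transform(state, target)
    print(state)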




What are the consequences and benefits of such a mental mapping technique?

As focused ideas are changing, or coming into existence, the size of the agency within the state becomes larger, giving it a greater possibility to communicate with the other goal agents.

As the goal agents shuffle positions they also have the ability to interact with one another. This is also dependent on the 'velocity' of the moving agents: two agents with opposing vectors will have less coincidental influence on each other.

The way in which we strategically approach a problem, and the way the problem develops, inherently allows for procedural time-based scripts to be developed. The way in which we communicate the goals of a trans-frame with our current situation is still unclear.

One way to imagine the existence of such a system is that any given design state can be steered incrementally towards another design state, following not pre-set paths but relational states. The transpositional phases also give rise to interaction between transforming states in a generally unpredictable fashion.




How I got trapped in the belly.

Working on a project years ago, a professor's strongest words to me were to "produce, produce, produce" as a pedagogical technique to explore the possibilities within the design process. I'm a fairly discursive but efficient person, and instead of wasting my time drawing out multiple solution variations and explorations... I just generated all possible solutions.

Types of generational processes:

01 Search the entire solution space.
This, although possible, becomes astronomically computationally intensive. For small 'framed' problems it is a possibility--framed meaning there exists a finite number of solutions. This type of example is explained in the non-determinant architectural project. The underlying idea behind this project is to generate and pick randomly, dancing around theoretical lines of generation and randomness. This raises the question: "Why can't you just generate one solution and accept it rather than generating all and randomly selecting one?" It is a bit more complex than this, because although the developmental procedures are determined, the outcome is not.

02 Determinant Generational
This formal process leads to a rule-based result in which there is very little room for random exploration. This example is described by the x-dimensional project. The technological support systems determine the formal representation of the object. The concepts associated with such a system allow for little flexibility of the rules. The process is very hierarchical. The formal output has very little to do with meaningful or useful spaces. The rigid process of a system is beneficial in determinant optimization problems in which unknown variables are kept to an absolute minimum.

03 Genetic_01-Growth Decay and Mutation
The example which most explicitly represents this model is the autonomously developing architectural agents model. This concept is a bottom-up approach using base formal elements of architecture to develop larger, more complex representations. Although it is also rule-based, a level of chance and uncertainty is introduced into the model, allowing for multiple generations of a product to be developed through successive runs.

04 Influential Choosing / Training
The choosing and training technique tries to simultaneously solve two large problems: 01, solve the problem of having designers influence the generational process; 02, allow the system to be trained based upon general 'state' concerns. This type of process involves random generation, but the problem becomes framed depending on the specifications of the designer. The search space is a finite space, but a more specific finite space. The problem with such a system is that it tends to be very problem- and form-specific. The training mechanisms involved are much too simple to be of any powerful use. The concept of encapsulating current states becomes the most compelling area for future research.

05 Genetic_02
This system is currently under exploration, and one which will be explained in more detail. This system tries to model evolutionary developments by mapping and encapsulating creative

Introduction

How might we begin to define and evaluate creativity, learn how to be more creative as individuals, and in turn begin to approach creativity from the standpoint of computation? How have psychiatrists, cognitive scientists, and computer scientists developed their theories and explanations of the role of creativity within their disciplines? We should first begin with the description of creativity. One might describe creativity as the act of creating anything. Others still describe it as something created out of nothing. Aristotle defined the principle of generation of the universe as nous poietikos, the poetic or creative reason. But creativity also requires the skilled and typically unconscious enlistment of a large number of everyday agents and psychological abilities--noticing, remembering, and recognizing. Boden suggests that each of these abilities requires subtle interpretative processes and complex mental structures.[1]

Introduction to Various Definitions of Creativity

Creativity is "a many splendored thing," as Guilford points out repeatedly.[2]

Creativity defies precise definition. Perhaps this explains the general problem embedded in the craft and nurturing of creativity. Creativity is seemingly an impossible event according to science; Webster's dictionary defines creativity as "something created from nothing." Another very perplexing and confusing statement--do we have to defy, or at least suspend, all of our scientific beliefs in order to grapple with such concepts? Others have attempted to define this ambiguous process or procedure as an event involving the production of something new, novel, or fruitful in the eyes of civilization. The object which is new and novel, when created, is judged by society. The "object of creation" could also be judged by the individual creator as something novel. Spearman saw creative thinking essentially as a process of seeing or creating relationships, with both conscious and subconscious processes operating. According to one of his principles, when two or more percepts or ideas are given, a person may perceive them to be in various relations (such as near, far, the cause of, the result of, a part of, etc.).[3] Another principle, held by Torrance, is that when any item and a relation to it are recognized, the mind can generate in itself another item so related.[4] This can be viewed as making metaphoric or simile discoveries. Aspects of Wallas' generalizations on creativity are the basis for most of the systematic and disciplined training methods existing in cognitive science today. The Wallas process identifies four components of the creative process: preparation, incubation, illumination, and revision.[5]

Various definitions are used to reach a better understanding and interpretation of the word. The methods involved by each group or category in hypothesizing aspects of creativity and interpretation have led to general theories about the brain. If we examine the main definitions and classify them, they can be organized into six major groups or classes. Of course many definitions can fit into one or more class.[6]

Class A Gestalt or Perception: This class places a major emphasis upon the recombination of ideas or the restructuring of a "Gestalt."

Class B End product or Innovation: "Creativity is that process which results in a novel work that is accepted as tenable or useful or satisfying by a group at some point in time."[7] Harmon refers to creativity as "any process by which something new is produced--an idea or an object, including a new form or arrangement of old elements."[8]

Class C Aesthetic or Expressive: This category tends to be more personally oriented, with a major emphasis on self-expression. It usually includes the role of the 'starving artist' who creates art for himself or herself.[9]

Class D Psychoanalytic or Dynamic: This group is primarily defined by certain interactional strength proportions between the id, ego, and superego.

Class E Solution thinking: In this category more emphasis is placed upon the thinking process itself rather than the actual solution of the problem. Guilford defines creativity in terms of a very large number of intellectual factors. The most important of these factors are the discovery factors and the divergent-thinking factors. The discovery factors are defined as the "ability to develop information out of what is given by stimulation." The divergent factors relate to one's ability to go off in different directions when faced with a problem.[10]

Class F Alternative or other: In this category one could find the definition of creativity as "man's subjective relationship with his environment".[11] Or according to Rand the "addition to the existing stored knowledge of mankind."[12]

Boden outlines two different types of creativity: one type is the psychological or P-creative, the other historical or H-creative. Both types deal in novel ideas of creation, but novel in regard to different interpretations.[13] P-creative thoughts, concepts or ideas are novel to the individual mind which conjured them. A P-creative thought is one which is personally generated and has never been thought by that person before, an idea with which the thinker previously had no understanding or connection. P-creative thoughts do not have to be novel in regard to society's interpretation. An idea is H-creative if the idea is truly novel--never thought of, developed, or constructed before. Most people tend to view the H-creative individual as more creative, in that the idea is completely novel.

Psychological problems and descriptions with creativity

For psychology, problems (which might be divergences from the norm) and creativity pertain to questions of normal functioning and pathology. Freud was fascinated by the notion of creativity; he himself may represent one of the most creative individuals of the 20th century in developing his discipline (and influencing many others). In most cases, psychology views creativity as functioning on a higher level. How can creativity categorically represent and manifest itself in the literary genius but also the lay person? Freud wrote little about the creative process, but dedicated much of his time to reading literary classics, as well as studying various creative and artistic individuals such as Leonardo da Vinci and Michelangelo. Within an artistic context, the process of creation pertains to issues of unconscious and conscious motivation. Creative thinking is a form of cognition with special relationships to learning, concept formation, and problem solving.[14] Freud believed that creative thinking and imagination were not limited to brilliant artists and gifted writers--they exist at various levels in every person. How did Freud really view the process of creation?

Freud wrote an essay entitled "Creative Writers and Daydreaming" and delivered it as a lecture in 1907 to a group of laymen. The essay attempted to develop certain psychological and pathological elements involved with creativity. Freud also gives insight into the understanding that creative objects, in order to be fruitful, must manifest themselves in some type of reality. There is an interesting stream of logic in Freud's description of creativity. He relates the notions of dreaming and daydreaming to creative activity: the common person brings creative ideas to life through dreaming, which operates in the unconscious. Freud looks at the creative ability of children to develop his theory of how creative processes form. This ability, which once existed in all children, transfers to other activities in adult life. The child, according to Freud, particularly has this aptitude and takes his "playing very seriously." Freud asks: if this "pleasurable" act of childhood playing exists, what happens to these actions as the individual grows older? Freud suggested that a transference occurs, and the impulse directs itself onto another object or action. As the individual grows older he becomes more ashamed of his unfulfilled wishes involving sexual wants and fantasies, and the unconscious brings these wishes to life in dreams. Freud continues by adding that "a happy person never phantasies (fantasizes), only an unsatisfied one."[15] Can we infer from this logic that if happy people do not fantasize, then unhappy (or sick) people who "dream" have aspects of creativity? In turn this unhappiness, as a generator of "wishful" dreams, being creative and acting in the unconscious, has the potential to influence and confuse other unconscious thoughts, leading to a variety of neuroses.

Freud continues by suggesting that in order for the process of creation to have any value it must, most importantly, reveal or show itself to be "fruitful." The imaginative writer develops his "dreams" into concrete textual forms, where they offer themselves to be not only potentially beneficial and "fruitful" for society, but also remedial to the existence of the creator. This process, as applied to the "imaginative writer," undergoes another transfer which Freud has difficulty describing. This transfer is an aesthetic and secret one based upon the individual creator. To Freud this is the artist's "innermost secret": an aesthetic which conceals the "author's" fantasies, drawing forth the reader's deeper psychical sources, described by Freud as an incentive-bonus. The process described above, in relationship to the propensity of the child to "naturally" play--synonymous with create--gives particular insight into Freud's postmortem analysis of Leonardo da Vinci, with its attention to Leonardo's youthful and playful attitude toward objects.

Freud, Jung, and other psychoanalysts emphasize a process occurring outside of consciousness. Their ideas of unconscious processes in creativity differ in significant ways. Blanchard's subconscious functions in accord with a system of teleological necessity and does not have the specifically defined or determining effect upon consciousness of the Freudian Unconscious.[16] Jung emphasizes the autonomous complexes which reveal the Collective Unconscious. Similar to the Freudian Unconscious, Jung's Collective Unconscious does not have an effect on the consciousness in creation.[17] Also, the Collective Consciousness considers society's favorable response to a creation, artistic or otherwise. Jung and Freud both viewed the attempts of psychoanalysis to explain creativity in the human condition as fruitless. Jung emphatically said, "Any reaction to stimulus may be causally explained; but the creative act, which is the absolute antithesis of mere reaction will forever elude the human understanding."[18] Freud studied the notions of creativity but also strenuously denied that he, or psychoanalysis, could ever penetrate the sources of creativity. Freud reiterated these comments in his paper "Dostoevsky and Parricide": "Before the problem of the creative artist analysis must, alas, lay down its arms."[19] The word "creativity" rarely appears in Freud's vocabulary. According to Nelson, there can be no doubt that Freud was as much obsessed by the passion for creativity as he was by the recognition of creative achievement. In 1910 he wrote to Ernest Jones[20]:

I could not contemplate with any sort of comfort a life without work. Creative imagination and work go together with me; I take no delight in anything else. That would be a prescription for happiness were it not for the terrible thought that one's productivity depends on sensitive moods. What is one to do on a day when thoughts cease to flow and the proper words won't come? One cannot help trembling at the possibility. That is why, despite the acquiescence in fate that becomes an upright man, I secretly pray: no infirmity, no paralysis of one's powers through bodily distress. 'We'll die with harness on,' as King Macbeth said.

In Freud's account fantasy plays a large role in the creation of literary works. Fantasy is primarily a manifestation of preconscious thoughts and feelings--in his general theory the unconscious plays the largest role in creation. In an attempt to explore and develop the psychology of creativity one must look at historical developments within the fields of psychology, cognitive science, and artificial intelligence to bring to light the advances as well as the pitfalls. Some questions which will be explored are: What is the origin of creative production? Are there certain patterns and conditions which must exist in order to give rise to creativity? What would be involved in a systematic exploration of creative processes? The double problem is to try to develop an understanding of creativity through creative explorations. Many artists, architects, writers, and inventors speak of creative flashes, or various states of mind in which inventive and intuitive thoughts arise.

Many descriptions of emergent creativity involve component processes. Wallas describes the process in four parts: preparation, incubation, illumination, and revision.[21] The process is initiated by the individual sensing a need or deficiency. Once the existence of a problem has been realized, random exploration and clarification of the problem is developed. A specific period of reading, research, discussing, exploring, and analysis of possible solutions takes place, where the problem is framed by conventions and other existing processes for solving problems. This process enables the generation of a new idea, illumination, and insight. Part of what makes this procedure possible is the comparison of various solutions to information and resources of knowledge, intuition and other pre-developed creative processes.[22] The concluding step involves a process of experimentation and evaluation. Wallas' incubation stage becomes particularly interesting in comparison to potential computational processes. The incubation stage is one in which the problem at hand is not consciously thought about. Which is to say, after a potential problem is prepared it is stored in memory and left "to the operations of the unconscious". Poincaré describes a similar process of incubation in his own work during the writing of Science and Method. He relates a time during which no conscious mathematical exploration took place, but which led to two great mathematical discoveries. Both developed after a period of "incubation" which was preceded by a phase of "preparation" in which "hard, conscious, systematic, and fruitless analysis of the problem" occurred.[23] If this resembles many creative individuals' attempts and methods for deriving creativity, what happens in the stage of incubation, and how can we potentially code this dormant incubative process algorithmically? The compelling question is how a complex computational process can abstain from "thinking" about or processing a problem after the preparation stage. Is it necessary at all? Computers are staunch followers and interpreters of their own reason and logic--it is difficult to force a computer to apply self-induced restructuring at the machine or program level.

Boden also comments, in agreement with Freud, on the beneficial aspect of being young in regard to conceiving a new conceptual space. For a variety of psychological reasons, the young--whether in science or in art--tend to be less inhibited about changing the generative rules currently informing their minds. The very young, according to Boden, are even better--they have an unjaded curiosity, generating all manner of mental adventures challenging the limits of the possible--and the very, very young are best of all.[24] Creativity, whether in adults or children, involves exploration and evaluation, but also the ability to "see" and perceive things in paradigmatically new ways--as children do.

Computational Research in Creativity

Many concepts have been borrowed from cognitive science and psychology and applied to areas of computational research. Two of these are generative systems and heuristics. Generative systems work off a rule-based idea, for instance the notion of language and grammar, or the rules involved in creating certain types of poetry. Heuristics can be referred to as commonly accepted and understandable knowledge.

Heuristic Reasoning

Heuristics are useful and helpful in a variety of productive and problem-solving activities. A heuristic is a way of thinking about a problem which can lead to new or alternative avenues which haven't been explored. Heuristic reasoning has been around for centuries; Euclid used heuristic reasoning in the development of a variety of his mathematical theorems. Several computer systems within artificial intelligence use heuristics to successfully solve problems. Most heuristics are pragmatic rules of thumb, or are restricted to various domains, revealing themselves as 'tricks of the trade'. Heuristics can lead to creative ends because the process of reasoning involves simple ways of looking at or approaching the problem differently. This sort of reasoning is not always a benefit, especially in computational processing, because in most cases search routines based on heuristic reasoning tend to miss, skip, and disregard problems which do not comply with, and cannot be coerced into fitting, a heuristic parameter.

Connection machine

One could describe a connection machine as several simple autonomous units communicating with each other simultaneously. Connectionist systems are used in psychological, neuroscience, and AI research. Massively parallel processing systems are modeled (in a very basic way) after the physical structure of the brain. Can connectionist ideas lead us to systems which can potentially develop and add to theories of human and machine creativity? Potentially. In more abstract terms, a connectionist network is a parallel processing system comprised of many simple computational units linked by connections (similar to a brain). The individual units modify other units' activities by varying degrees, depending on a simple connectionist weight between 1 and -1. The changes in weights are governed by differential equations, and a concept is represented by the system working towards a stable activity pattern across the entire network. These networks update and adjust themselves continually in an attempt to reach a state with the maximum probability of equilibrium. These types of systems are self-regulating and self-organizing, and information is distributed through the network depending on the excitability or inhibition of various nodes to 'fire' in response to regulated and received information. The system also uses, in accordance with statistical probabilities, nodes which sometimes fire at random--this, in principle, guarantees that the network will (eventually) learn any representation whatsoever.[25]
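A toy sketch of that update rule: units linked by weights between -1 and 1, each settling toward the weighted activity of its neighbors, with occasional random firing. This illustrates the idea only; it is not any particular published network.

    import random

    # Asynchronous updates drift the network toward a stable activity
    # pattern; rare random firings keep it from locking in prematurely.

    N = 5
    weights = [[random.uniform(-1, 1) if i != j else 0.0 for j in range(N)]
               for i in range(N)]
    activity = [random.choice([-1.0, 1.0]) for _ in range(N)]

    for step in range(200):
        i = random.randrange(N)
        if random.random() < 0.05:
            activity[i] = random.choice([-1.0, 1.0])   # random firing
        else:
            net = sum(weights[i][j] * activity[j] for j in range(N))
            activity[i] = 1.0 if net > 0 else -1.0     # settle with neighbors

    print(activity)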

How do connectionist systems allow us to view the process as potentially creative? These systems operate under very few assumptions, and the distributed system can learn to associate patterns without ever being explicitly programmed with respect to those patterns. If a system can potentially pick up patterns involved in any number of activities or concepts, and associate those patterns with alternate sources, it may lead to insightful developments. Interestingly enough, these approaches to data analysis (which could be multi-dimensional data, even in magnitudes of hundreds) allow such systems to recognize "meaningful" relationships which could be virtually impossible for a human to recognize. Boden and Freud, above, both touched on the role of the child in creativity; in many connectionist systems, as the machine is learning, it behaves in ways similar to how children learn, work, and conceive of ideas.[26]

Marvin Minsky attempts to hierarchically divide the processes of intelligence into semi-autonomous units which he calls agents.[27] These agents are specifically designed to do various tasks, some more important than others, some rarely used, but in the end various agents control other agents, sending information back and forth, in order eventually (with the build-up of enough agents in a complex net) to produce intelligence. Can we really accept this notion of building up enough complex machinery to bring knowledge to life? What is the current state of the research, and what types of creative objects and processes have been created by the computer? A variety of mental concepts and processes have been borrowed from psychology and cognitive science and applied to computational concepts. Sometimes these approaches lead to interesting and creative results in computer-generated objects, ideas, and reasoning.
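
A minimal sketch (my own gloss on the idea, not Minsky's implementation): "higher" agents know nothing about the low-level work; they only delegate to sub-agents, and behavior emerges from the net of delegations.

    class Agent:
        def __init__(self, name, subagents=()):
            self.name, self.subagents = name, list(subagents)

        def act(self):
            if not self.subagents:          # a lowest-level specialist
                return [self.name]
            work = []                       # delegate and collect results
            for sub in self.subagents:
                work += sub.act()
            return work

    grasp, move, release = Agent("grasp"), Agent("move"), Agent("release")
    build = Agent("build", [grasp, move, release])
    print(build.act())   # ['grasp', 'move', 'release'] -- behavior by delegation alone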

There are several examples in which the computer tries to replicate creative acts within the focus of artificial intelligence research. The final evaluation of such a process is to discern whether or not the processes, or the outcomes of such processes, are creative. Does the system lead to creative reiteration within itself, leading to unexpected results? How does the computational procedure know when to stop, that is, know when it has reached a creative "product"? Within computational creativity there are examples in art, literature, music, science, and medicine. I will briefly explain a few of these developments.

Parsing literature was one of the first endeavors of AI research. Language was explored to generate `seemingly' creative responses in poetry, story writing, and literature. Early attempts at parsing language involved a database of words which had either certain characteristics and predictable connections, or random connections, while the system was choosing other words. These attempts, although sometimes very humorous, were interesting initially, but the compelling aspect wore off as the systems remained at the same level, generating predictable representations of various structures. AI research has also investigated various psychological phenomena which are active in most stories and literature. The algorithms for these story generators are coded with representational structures of interaction and behavior mechanisms which can be transformed into a crude outline by process types such as: scripts, what-ifs, plans, MOPs (memory organization packets), TOPs (thematic organization points), and TAUs (thematic abstraction units).[28]
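
A minimal sketch (the script and vocabulary are invented for illustration; the systems cited used far richer structures): a "script" here is an ordered list of scene templates whose slots are filled with randomly chosen characters and props, yielding a crude story outline.

    import random

    SCRIPT = ["{hero} enters the {place}.",
              "{hero} wants the {goal}.",
              "{rival} blocks {hero}.",
              "{hero} uses the {tool} and obtains the {goal}."]

    bindings = {"hero":  random.choice(["knight", "detective"]),
                "rival": random.choice(["dragon", "smuggler"]),
                "place": random.choice(["castle", "harbor"]),
                "goal":  random.choice(["crown", "ledger"]),
                "tool":  random.choice(["sword", "bribe"])}

    for scene in SCRIPT:                    # each scene is a slot-filling step
        print(scene.format(**bindings))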

Harold Cohen, an artist, analyzed the creative process and the pragmatic procedures he used in creating his own art. He believed that if he could become cognitively aware of the methods he was employing to create his art, he could encode the process as an algorithm and implement it on a computer. His intention was to communicate his style to the computer. He initially wrote programs which would randomly draw abstract shapes based around relative sizes, positions, and relationships. This program developed and grew into one which could generate shapes with a single line. The program has subsequently grown into AARON (the name of the program, or artist), which is capable of drawing people and plants. AARON will always draw a different picture because the process involved uses multiple IF-THEN rules in combination with small degrees of randomness. Some of these images are not only visually and compositionally interesting, but they are drawn and created by a computer. AARON is capable of this because of a combination of general and specific knowledge. AARON encodes hierarchical information and knowledge about human body parts, their relationships, and the affordances and constraints of general body movement. AARON doesn't have a grasp of the meaning of the whole picture it draws, or exactly what it will look like; it knows only to follow specific relationships and statements.
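
A minimal sketch (assumed mechanics only; Cohen's actual program is far more elaborate): IF-THEN placement rules over a simple body hierarchy, combined with small random jitter, so each run produces a different but rule-abiding drawing plan.

    import random

    def draw_figure():
        plan, y = [], 0.0
        for part, height in [("head", 1.0), ("torso", 2.5), ("legs", 3.0)]:
            x = random.uniform(-0.3, 0.3)        # small degree of randomness
            plan.append((part, round(x, 2), round(y, 2)))
            y -= height                          # IF a part is placed, THEN the
                                                 # next part hangs below it
        if random.random() < 0.5:                # sometimes add a plant nearby
            plan.append(("plant", round(random.uniform(-3, 3), 2), round(y, 2)))
        return plan

    print(draw_figure())   # e.g. [('head', 0.12, 0.0), ('torso', -0.05, -1.0), ...]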

Other types of research have focused on computer-generated music. Various artists are developing tools and programs which analyze and create music. Stephen Holtzman developed and wrote music programs and applications to generate digital music. Holtzman developed a concept (and programming language) titled Generative Grammar Definition Language (GGDL). The system is based on a broad set of dynamical rules for the interpretation and generation of music. Johnson-Laird wrote a jazz improvisation program which models human psychology. His program can generate the bass-line for several different jazz routines based upon structure, and spontaneity within the structure. The program is based upon, and can only create within, a recognizable artistic style; it is not capable of transforming its own internal structure. George Stiny studied aesthetic concerns in an attempt to apply algorithmic methods to computer-generated products.[29] These products were first musical, in their analysis of aesthetic systems, but the research has shifted to pattern matching and emergent shape grammars. These emergent rules will hopefully have the ability to examine the current state of a given image and then derive rules from the shapes.
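
A minimal sketch (a toy grammar of my own, not GGDL or Johnson-Laird's program): a generative grammar rewrites a start symbol into a bass-line, and because every rule stays inside one idiom, the output stays inside one recognizable style, as the text describes.

    import random

    RULES = {"PHRASE": [["BAR", "BAR"], ["BAR", "BAR", "BAR", "BAR"]],
             "BAR":    [["C", "E", "G", "E"], ["C", "G", "A", "G"]]}

    def generate(symbol):
        if symbol not in RULES:             # a terminal: an actual note
            return [symbol]
        out = []
        for s in random.choice(RULES[symbol]):
            out += generate(s)
        return out

    print(" ".join(generate("PHRASE")))     # e.g. "C E G E C G A G"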

How are these examples relevant to the psychological investigation of creativity? It is interesting to view the computational process as a series of litmus tests for proving or enabling aspects of psychological or philosophical theories about the brain. If one can algorithmically code instantiations of particular or general human processes, these will in turn inherently affect the discipline and understanding in other discourses. We are still left with the concern of how to judge the representations and products derived through computational structures. Some people will forever disregard the computer as an autonomously creative entity.

I would not suggest that creativity is a by-product of mystical or magical processes occurring in our minds. These processes can potentially be explained with greater insight from psychology, cognitive science, and artificial intelligence research. I would agree with Boden, Taylor, and Freud, who suggest that all people have the propensity to be creative and that it is not a mutually exclusive `gift' given to a select few, although it operates in varying degrees between people. I would add that there exists in creative processes an important degree of idiosyncratic process and randomness. I would also argue that our human activity is built upon, influenced by, and rests on a biological substrate: genetic endowment, the structure and functioning of the nervous system, and various metabolic and hormonal factors. Do we judge our idiosyncratic processes of creation with our internal structures?

There is another camp of psychiatrists and social workers who try to measure various aspects of creativity; in fact there are multiple tests which try to recognize creativity. It is hard to put faith in such systems, which measure only certain aspects of mental prowess, intelligence, or creativity. There is a balance which has to be watched when trying to rationalize and order such processes: as stated above, creativity (although common in all of us) must be judged, in the end, by society or future generations. The act of creating something which is novel is not a particularly difficult process. A common example is to randomly choose a number of words out of a dictionary. This process alone (for the most part) leads to combinations of words which have probably never had a similar relationship to the other words--anybody can do this, including a computer. It might seem creative by Boden's definitions of P-creative and H-creative; in fact it might be a combination of words which have never been concatenated in such a fashion, thus allowing the word stream to be labeled as H-creative. In the end the random word stream (though never thought of by anybody before) would have to be judged by its context and by society.
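
A minimal sketch of the dictionary experiment just described (the word list is an arbitrary stand-in for a full lexicon): novel combinations are trivially cheap to produce, which is exactly why novelty alone cannot certify creativity.

    import random

    lexicon = ["azure", "ledger", "moss", "turbine", "whisper", "anvil", "orchid"]
    stream = " ".join(random.choice(lexicon) for _ in range(5))
    print(stream)   # almost certainly a word sequence never written before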

Perhaps the most important aspect of my definition for the allowance of creative properties to arise is not only the process one must go through to enable creativity to evolve, but that the process gives rise to creative understanding--in effect, giving creativity a platform with which to develop. In the above-mentioned example, the creation of a random word stream, the series of words might seem novel, but the concept and process by which the word stream developed was not. This does have potential, and in fact anything created (whether creatively or not) has the potential, in the end, to become part of a creative process. For example, if a random word stream is developed, but is subsequently followed by a process of investigation into the etymology, semantics, and semiotics of the words, various connections and theories between the objects and ideas could arise. Examples are taken from history showing how the words have been used before and their relationships. Or research into others who have similarly developed semantically possible derivations and similar combinations which inherently carried alternative meanings could evolve into a deeper creative descriptive process. This process in the end would be a formal construction built from existing creative processes.

A creative process is an amalgamation and development of other creative processes. It would be seemingly impossible to develop a new creative process which is not based on a set or series of existing creative processes. The creative process can be a conscious or an unconscious act. It can be developed cognitively and doesn't have to be a product of the black box of consciousness.

There is a difference between a creative act and a creative process, even though a creative act involves a creative process. Creative processes are developments from creative acts, but are at the same time processes which inherently lead to creative products. Creative processes derive from contextual analysis, arising from other previously developed creative processes. For example, the process which Jackson Pollock developed as a catalyst for creating art is a creative process. It is a process because contained within itself are essentially the rules and methods for implementing itself over and over again. This is to say that anyone can use the particular creative process to generate other forms of creative products. Other examples would include Schoenberg's generation of atonal and multi-tonal music. Essentially Schoenberg analyzed the current condition of music, mastered the existing methods, and derived new methods which were developments from the existing ones--in effect, creating a creative process with which to generate new music. The definition of such creative processes is more complex: if a creative process is used, a creative product is not guaranteed. Just because another artist, or hack, uses Pollock's method in creating a product, the result in the end might not be creative; in regard to this example, it could be far from creative. Another example might include the development of mathematical theorems and formulas, which are in and of themselves processes. The method used in the generation of such a system is a creative process, but using the theorems, in turn, might not generate creative results. The concept is difficult to convey, especially when describing such scenarios to scientists or computer engineers who are operating in an objective realm. The concept relies on subjective and aesthetic interpretation of the process, on the reiterative notion of generating creative objects from those processes, and on the continual generation of novel creative processes from subsequent products and processes.

Future developments in creative computational structures will involve isomorphic creative processes. Potentially the software, or even the hardware, will have the ability to recursively generate novel creative processes. A computer has very little difficulty in generating multiple combinations and solutions. In the case of Harold Cohen's program AARON, the algorithm could generate any number of images--all different. What is missing from such potentially creative algorithmic systems is a sense of reflection. This sense is evident in Wallas' four components of creativity, where the reflection stage is usually an analysis of the creative process. These creative processes are updated and developed by creative people (artists, scientists, writers, even programmers) throughout the process and in attempts to solve future problems. The goal of artificial intelligence is to develop a creative process which generates creative processes. It is true that spontaneous intelligence will not arise out of a series of circuits, but intelligence-like and creativity-like processes can develop. In order for these techniques to be effective, both intelligence and creativity have to build a body of knowledge and use that knowledge, and experience, to evaluate the objects or processes being created--not an evaluation based upon a mathematical procedure, but an aesthetic analysis in relationship to its knowledge and creativity. Difficulties with aesthetics are inherent because the structures are based upon deeper meanings and relationships to subjective and experiential attributes. For example, I might hold John Cage, as a musician, in very high regard, where others might view his art as a sham. My fascination could be explained by the fact that I view the process by which Cage proceeded as a creative process generator leading to innovations in novel musical processes. It would also help if I knew more about the history and development of music, and how Cage was a disciple of Schoenberg. To place Cage in relation, and in comparison, to Buddhist philosophy would also assist in more fully understanding the process, rather than merely recognizing and interpreting the final musical object. The important aspect is that although the music and its generative processes are seemingly and often random (which any hack at the piano can play), it must be placed and judged in multiple contexts. If a computer were to generate the same music, I would have more difficulty in thinking of the music (or the process) as creative. The same holds in comparison to modern art, and even more so to minimalist art.

What makes a single-line drawing (that is, drawn without lifting the instrument) of a face, by Picasso, creative or even worth anything? How can Kasimir Malevich get away with painting a black square on a white background and in turn become a cornerstone of modern art? Many issues involving the production of the object must be considered. First, the object must be viewed in regard to the painter's personal development. Picasso's line drawing of a simple face, drawn after he painted Guernica, embodies the creative processes and knowledge behind the manufacturing of both works. The creative processes developed for the creation of Guernica were building blocks for the production of the creative processes for the line drawing--however simple it may seem. The environment, civilization, and context in which the creative individual is creating are also important. The creative process also must be evaluated in connection with its derivative components, which is to say, what other processes and creative processes led to the development of the new and novel creative processes.

Another problem left to explain in the evaluation of a creative object is how to judge an object which is contextless. Which is to say, if you viewed a painting (without knowing anything about it), how would you judge it? If you liked it, could you explain why? Could someone discern the basis of their self-interpretive aesthetics and explain why? The problem becomes more compelling when confronted by two identical pictures, one created by David Hockney and one by AARON. Could an expert tell the difference? Probably not, but they still are not the same, because there is deeper meaning in such objects, based upon a variety of creative processes which can be judged and evaluated according to different concerns. Still, part of art is in the appreciation of the object, the image, the text, or the sound without knowing the background of the artist or cultural theories, precedents, or the history of the media. Further, this type of appreciation is primarily governed by unique individual psychological processes, as well as, according to Jung, a collective unconscious. Jung presents his argument by an analogy with social life--which in turn can be compared to society's aesthetics:[30]

Just as the individual is not only a separate and isolated being, but is part of society, so also the human mind is not an isolated and entirely individual fact but also a collective function. Again, even as certain social functions or tendencies are, so to speak, opposed to the egocentric interests of the individual so also certain functions or tendencies of the human mind are opposed, by their collective nature, to the personal mental functions.

Bibliography

Akin, Omer, Psychology of Architectural Design, Pion Limited, London, 1986.

Boden, Margaret A., The Creative Mind: Myths and Mechanisms, Weidenfeld and Nicolson, London, 1990.

Boden, Margaret A., Dimensions of Creativity, MIT Press, Cambridge, MA, 1994.

Freud, Sigmund, On Creativity and the Unconscious, Harper and Row, New York, 1958.

Freud, S, "The relations of the Poet to Day-dreaming", 1908.

Freud, S., "The Moses of Michelangelo", 1914

Freud, S., "A mythological Parallel to a visual Obsession", 1916.

Gay, Peter, The Freud Reader, W. W. Norton & Company, New York, 1989.

Hausman, Carl R., and Rothenberg, Albert, The Creativity Question, Duke University Press, Durham, NC, 1976.

Hofstadter, Douglas R., Gödel, Escher, Bach: An Eternal Golden Braid, Vintage Books, New York, 1979.

Marcuse, Herbert, Eros and Civilization, Beacon Press, Boston, 1966.

Nelson, Benjamin, Freud and the 20th Century, World Publishing Company, Ohio, 1965.

Philipson, Morris, Outline of a Jungian Aesthetics, Northwestern University Press, 1963.

Sartre, Jean-Paul, The Psychology of Imagination, Citadel Press Book, New York, 1991.

Spector, Jack, The Aesthetics of Freud, Praeger Publishers, New York, 1973.

Sternberg, Robert, The Nature of Creativity, Cambridge University Press, 1988.

Architecture in Non-Place-Space

The intention of the thesis is to explore how computational design and computer-aided design will influence the process of design and the future training of design students.

Typically the computer is metaphorically represented in conventional software applications as a retrofitted pencil--traditional software focuses on the computer as a tool for production rather than as a revolutionary new design instrument. The problem lies in the creative nature of design, which lends itself to being an autonomous and unpredictable process from architect to architect, thus making software for design relatively marginal and difficult to produce. In a profit-driven consumer industry the development of design software will follow alternative paths. The future development of design software will direct its attention to the creation of an open system, allowing the consumer/designer to create his/her own application based upon his/her own process of creative envisioning. Therefore, this thesis advocates, and is an experiment within, the role of self-designed programs which allow the computer to become an essential design tool. The exploration of the thesis is to use computational design as a pedagogic method tied integrally to the process of thinking.

The process is also an exploration of the use of language as a tool for the realization of complex mental processes. If we as designers can train ourselves to use our hand as an extension of our brain, can we also use language to become the hand that designs for us? Students, from the onset of their design education, should be trained to interact with the computer on a programming level rather than being limited to the conventional structure of existing software--with more of an emphasis on creativity and problem solving, using computational design as a tool to explore and to do the work. The future of 5th- and 6th-generation languages will allow design software to integrate programming as a way for individuals to customize their packages. The vehicle and medium to facilitate the process will be the disjuncture from complex programming code to the use of basic, recognizable, and understandable English.

Testing the Abstract--Project: Architecture in Non-Place-Space

The process includes programming, and writing compendium procedures within existing CAD and animation packages such as Autocad, 3dStudio, and Microstation, to take advantage of the commands and power within the software without having to develop an entire graphics-based package. The thesis intends to utilize computational design as a method to explore new options within architectural theory under the parameters defined within non-place-space. Non-place-space can be generated and defined in three different environments: 01 A virtual site located within the computer--without the limitations, restrictions, and constraints of reality. 02 Various non-perceivable conditions existing within reality--for example non-reflected light waves, radio waves, and transforming conditions in the urban landscape. 03 Architecture of consciousness--inhabitation of the constructed ideas and thoughts situated in the mind.

Designing the Structure

The process begins by exploring how forces, both internal and external, affect the transformation of an amorphous urban condition over time. Urban conditions can be described topologically when demarcated by defining their bounding characteristics. These urban conditions can be the mapping of the changing nature of land values, fluctuating crime rates, shifting demographics, the retrofitting of programmatic use in buildings, etc. There are two important properties of these conditions: 01 They exist free from conventional boundaries within the urban setting--they are not determined by the layout of axes or the traditional grid of the city; and 02 The conditions exist as transforming and amorphous objects through time. The project simulates these urban conditions and their development through the influence of internal forces within the condition and the causation of external forces transforming the object. As a result of describing the nature of a transforming entity, the object is mapped through time as an extrusion, so that the stacking of transforming sections produces a volume of space and time describing the condition. The next experiment was to examine the intersection of developing urban conditions within the city. These intersections represent the union of specific internal and external forces over a given section of space and time. These intersections then undergo a transformation to describe the existing temporal change of an object as perceived by humans conventionally--for example, witnessing time and space transform immediately, whereby past events exist only as a memory trace, and the future exists only as a partially predictable occurrence based upon present descriptions of the trajectory and acceleration of the transforming object.
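
A minimal sketch of this mapping (the growth and drift transforms are invented placeholders for the actual forces): a 2D boundary of an urban condition is transformed per time step and stacked at z = t, so the stack of sections reads as a space-time volume.

    def transform(section, t):
        # placeholder forces: internal growth plus an external eastward drift
        return [(x * (1 + 0.1 * t) + 0.5 * t, y * (1 + 0.1 * t)) for x, y in section]

    section0 = [(0, 0), (1, 0), (1, 1), (0, 1)]     # initial condition boundary
    volume = [[(x, y, t) for x, y in transform(section0, t)] for t in range(5)]
    for layer in volume:
        print(layer)   # each layer is one section of the space-time extrusion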

The structure for this transformation is to physically manipulate the object based upon its formal characteristics--for example, analyzing the shape and structure of an object, then transforming the object based upon the orientation and sizes of its faces under the influential variables of the shifting condition--these transformations were developed to represent architectural qualities of planes, volumes, and frames.

Application of the Structure on an Urban Virtual Site

Once the structure was developed, the experiment was implemented on a virtual site. The site was created by scanning overhead photographs of urban sites within Los Angeles. The scans were then translated from TIFF images to .GIF bitmap files. The bitmaps were then read as objects and transformed into a .DXF vector format--a very basic raster-to-vector translation. The individual site objects (approximated buildings), once vectors, were analyzed by their orientation, adjacencies, shared vertices, and size, to be interpreted as architectural "building" forms by the computer. Through the toggling of these variables a representation of an actualized extruded site is possible--with the objects having residential, commercial, and industrial characteristics.
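
A minimal sketch of the raster-to-vector step (vastly simplified from the TIFF/GIF/DXF pipeline described above): dark pixels are flood-filled into blobs, and each blob is emitted as a crude rectangular "building" footprint.

    bitmap = ["..##....",
              "..##..#.",
              "......#.",
              ".##...#."]

    def blobs(grid):
        seen, out = set(), []
        for y, row in enumerate(grid):
            for x, c in enumerate(row):
                if c == "#" and (x, y) not in seen:
                    stack, blob = [(x, y)], []
                    while stack:                          # flood-fill one blob
                        px, py = stack.pop()
                        if (px, py) in seen:
                            continue
                        if not (0 <= px < len(row) and 0 <= py < len(grid)):
                            continue
                        if grid[py][px] != "#":
                            continue
                        seen.add((px, py))
                        blob.append((px, py))
                        stack += [(px+1, py), (px-1, py), (px, py+1), (px, py-1)]
                    xs = [p for p, _ in blob]; ys = [q for _, q in blob]
                    out.append([(min(xs), min(ys)), (max(xs), min(ys)),
                                (max(xs), max(ys)), (min(xs), max(ys))])
        return out

    print(blobs(bitmap))   # one rectangle per approximated building footprint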

The transforming topological conditions are then applied and plotted over the three-dimensional virtual city--the existence of these temporal conditions over areas in the city produces transformations of the site-objects within their respective conditional zones. Each use (residential, commercial, and industrial) is affected differently by each topological condition and is coded specifically based upon the condition's own transforming development. The coded transformations are formally manipulated through the scaling, rotation, and translation of the existing site objects' vertices, or through a process of dividing an object's 8-vertex base into a 16- or 64-vertex object--a process of dividing all distances between vertices at their midpoints. These transformations are then translated from triangulated surfaces into two-dimensional polylines and then extruded into space, making volumes to represent potential characteristics of space. In the example and drawings provided, there are three changing conditions over a specific area in the virtual city: one condition carries a frame transformation--changing the existing site objects into shifted frames; another carries a transparent transformation--transforming the existing site objects into shifted transparent objects; and the third carries a solid transformation--manipulating the objects to represent a solid shift.
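
A minimal sketch of the midpoint-subdivision step just described: inserting the midpoint of every edge doubles the vertex count, so repeated application takes an 8-vertex outline to 16, 32, and then 64 vertices.

    def subdivide(poly):
        out, n = [], len(poly)
        for i in range(n):
            x0, y0 = poly[i]
            x1, y1 = poly[(i + 1) % n]     # wrap around the closed outline
            out.append((x0, y0))
            out.append(((x0 + x1) / 2, (y0 + y1) / 2))
        return out

    base8 = [(0,0),(1,0),(2,0),(2,1),(2,2),(1,2),(0,2),(0,1)]   # 8-vertex base
    v16 = subdivide(base8)
    v64 = subdivide(subdivide(v16))
    print(len(base8), len(v16), len(v64))   # 8 16 64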

The final experiment focuses on the effects of the transformations on a smaller portion of the site. These objects undergo another transformation of extrusion, to begin to realize the potential of space forming--whereby sections of the objects can be taken and evaluated based upon the changing characteristics of the transforming conditions' topologies.

Conclusion

The process, although as yet not founded on specific scientific principles and simulating arbitrary data, is not meant to be viewed as an architectural or urban solution, but as a process of experimenting with computational programming as a design technique. It is an experiment to visualize the transformations not only within one's mind but within the computer. It is a way to show how a designer imagines manipulating space, using computational design as a vehicle to express this condition. It is a way to explore alternative notions of how we view space, time, and architecture. Computational design will be a pedagogic instrument for students and professionals to explore new realizations within the process of design.


The Loss of Sensible Referents: Architecture in Non-Place-Space



Self-generational architectural systems within a computational environment


Table of Contents

Thesis Statement

Thesis Abstract

Chapter 01: Technological Historical Accounts

Chapter 02: Technology and Nothingness

Precedent

Architectural Process

Theory

Application

Program

Setting Up Structure

Defining Resultant Terminology

Architectural Ramifications--Design

Bibliography


Time is the accident of accidents-

-Epicurus

The laws of harmony that are internal today will become external tomorrow-

-Kandinsky

You cannot think about thinking, without thinking about thinking about something.

-Seymour Papert

That theory is worthless. It isn't even wrong!

-Wolfgang Pauli

It has been the persuasion of an immense majority of human beings that sensibility and thought [as distinguished from matter] are, in their own nature, less susceptible of division and decay, and that, when the body is resolved into its elements, the principle which animated it will remain perpetual and unchanged. However, it is probable that what we call thought is not an actual being, but no more than the relation between certain parts of that infinitely varied mass, of which the rest of the universe is composed, and which ceases to exist as soon as those parts change their position with respect to each other.

-Percy Bysshe Shelley

Introduction:

The development of civilization is motivated by the substantial interaction of various discourses. These interactions allow important thoughts and developments within individual fields of study to have influence on alternative discourses. Architecture represents this fact by being a profession having roots and involvement in multiple fields. Architecture, as an apparatus of education, fundamentally allows the systematic personal exploration of any number of these alternative applicable areas.

The importance of theory and process

The future of architecture

The experiment's intent is to investigate the possibility of borrowing from the fields of science, mathematics, and technology. The process sets up a hypothesis similar to that of a scientific experiment. The hypothesis revolves around the possibility of removing the conventional product of architecture as a fixed object and replacing it with objectified thought processes modeled by the computer. The purpose is to utilize the computer as an interactive and advantageous design instrument, allowing it to become a necessary extension of the thought process and the architectural method. The thesis is to explore architectural theory using the tools and modeling methods of science and technology.

The investigation will try to define and explore an architecture in non-place-space[1]; an experimental field where the interactions of architectural thought are explored, described, and created. An attempt is made to examine the significance of architectural thoughts as interrelational multi-dimensional processes[2], which most commonly result in built form, though the necessary product of architecture does not have to reside in built form. Architecture must remove the stigma of a singularly motivated product and replace it with a generational process producing multiple justifiable conditions derived from the structure of procedural thought.[3] The exploration is to understand the potential of having an architecture which is temporal, unpredictable, transformable, and mutable as its condition of existence. In essence it is a process which very closely relates to the condition of built architecture--being occupied, changed, renovated, and contextually challenged--but having these potentially uncontrollable circumstances as resultant factors influencing the system. The purpose of this exploration is to visualize the potential of manifesting thought and creativity by adhering to an objective scientific model. The model will then be applied and tested systematically on architectural objects at multiple scales.[4] Theoretically, the aim is to forecast the possibility of using computer methodologies within the design process so that the produced architectural form more closely resembles the process of thinking: transforming, manipulating, and not seeing architecture as a frozen completed object. The computer will act as the systematic generator and information organizer. Utilizing the computer as an essential design tool will allow new paths within architecture to be developed.

Abstract--Technological/Historical Account

We are witnessing a change, a shift of paradigms, and one of fantastic unfolding challenges. The world is involved in changing medias, mediums, and conventional definitions--essentially a world challenged by electric/electronic technologies. With the advent of mass quantities of personal computers, televisions, and telephones--reaching into everyone's life--we challenge our personal and conditioned perceptions. We are in effect dealing in paradoxical spatial relationships--where there is no here or there--we can, unbound by distance, be everywhere. We enter constructed space, a space unconfined and unchallenged--literally moving beyond Euclidean geometries into technological 'space-time'. We have the ability not only to push the architectural envelope but in effect to fold, break, and turn it inside out. We can exist virtually without day or night, weather, or gravity; but now with the new constraints of the electronic day--a day which has the potential to expose itself instantaneously, where speed-distance will replace physical dimension.

If architectonics once measured itself according to geology, according to the tectonics of natural reliefs, with pyramids, towers and other neo-gothic tricks, today it measures itself according to state-of-the-art technologies, whose vertiginous prowess exiles all of us from the terrestrial horizon.

Jean-Pierre Changeux[5]

The existing convention of reality--basing itself truly in perceptions and physical beliefs--now falls by the wayside, giving rise to a multitude of alternative realities--existing in other forms of appearances with the absence of shape, dimension, and meaning.

Precision is the relationship of measured value to the value of its uncertainty. One could say that precision is its inverse: relative uncertainty.

Patrick Bouchareine

The epistemological movement of the solar day to the chemical day to the electric day to the electronic day: everlasting.[6] The materiality of mental images can no longer be doubted. This perhaps has the potential to become real--or postreal--for the belief that mental images can be recorded and seen visually on a screen doesn't appear to be too far-fetched, and is actually obtainable.

Is it possible to intellectualize architectural thinking? Which is to say that architectural training, instead of being primarily visual, is essentially a personal, self-constructed method for thinking. The abstract notion of communicating rhetoric between architects is accepted but not defined--we morphologically generalize terms describing planes, structure, systems, site, precedent, aesthetics, truth, language, etc. They are in essence notational constructs, for there exist no precise definitions, only implied understanding. The history of architectural realization allows for adaptation and flexibility. If physics and mathematics operate in a world of determinate precision and relative structure, what happened with architecture? Certainly architecture, revolving around artistic principles which most of the time are lost from drawing to building, has a strained relationship to science and mathematics. Where was the disjuncture? The initial division is understandable--based upon physically unrepresentable or non-conventionally describable paradigm shifts in mathematics and science. Architecture is being forced, cornered, and made palatable by unseen forces of great economic, political, and social structures. In effect it is at the whim of convention, for in itself architecture remains luxurious, unnecessary, trivial, and trapped. This thesis, therefore, does not advocate the liberation or exoneration of architecture; rather, by investigating the possibility of grafting architectural thought and method onto mathematical principles and scientific theory within a technological framework, it allows architecture to explore itself unbound by perceptible conventions.

What potential is there in non-perceptible space? It is the investigation of a constructed scientific model--matrix modeling, Heisenberg's uncertainty principle, and notions of space--which produces this non-perceptible concept. Initially, 'primary' energy acts (structure) influence and force theoretical objects placed into 'constructed space' (site), which in turn define the boundaries of that 'space' by their interaction (program); this methodology produces an architectural thought model. Forces, and the breakdown of energies within the system (entropy), will produce perpetual change within the system, creating continual action, movement, adjacencies, relationships, and possibilities.

The vanishing point, the horizon line

The vanishing point is the anchor of a system which incarnates the viewer, renders him tangible and corporeal, a measurable, and above all a visible object in a world of absolute visibility.[7]

Perspectival construction is a system for transcribing how we perceive the world through our eyes. The vanishing point in art and architecture was an important part of manufacturing a simulation of the world in two dimensions. Brunelleschi, integral in this development, conducted an experiment in 1425: while painting a perspective of the front of the Baptistery, he turned the painting around so it faced the Baptistery and placed a mirror in front of the painting; from the backside of the canvas (now looking towards the Baptistery) he pierced the canvas, gazed through the hole, and saw the mirror reflecting the painting and the Baptistery at the same time--both represented in two dimensions--proving his constructive technique of perspective, for the two were equivalent.[8] The perspective method allows the transferring of information from reality to a tangible representation of reality. Essentially the vanishing point is the point at which our focus is located. The vanishing point represents a fixed and perceptible view of the world. It represents a definite location within a real and physical scene. The importance of understanding this concept is that the place from which we look depicts a point zero, and the vanishing point is bound by the limitation of our sight and is thus a concept terminating our sensorial range. The vanishing point represents a point of view which, if mirrored, would reflect the opposing view of the world as seen by the viewer. The vanishing point is the anti-point of the viewing point.

Sense and Perception--Speed Distance

Our perceptions are inherently rooted in our lack of understanding of the concept of Nothing. We can only mentally construct the representation of Nothing. To imagine a condition of Nothingness we must remove ourselves as time-bound material objects and appoint concepts hierarchically above the object. Perhaps we can imagine space without anything in it, but only in the sense of visually accepting Nothing, for in the confines of our universe we do not perceive the presence of energy--or the emission of waves. On earth our environment unfolds before us as perceived through our senses. Senses are channels of sensations--to sense something is to detect something--and it is not possible to detect 'Nothing'. Logically our senses prevent us from assigning a real value to Nothing. What happens to our perceptions once technology moves, and continuously advances, forward? Within this century alone we have gained access and the ability to move physically, and mentally, unbound in our world and universe. It is at this departure point that perceptions can become confused. The contemporary manifestations of transportation and telecommunication technologies result in the proliferation of Nothingness. It is from here that the concept of the conventional vanishing point is departed from. We now have the potential to experience multiple vanishing points, where perspectival reduction is no longer evident or, in any case, relevant. Fundamentally we are preparing to occupy all space between the conventional static point of viewing and the horizon line. It is this concept of existing nowhere which is becoming part of our contemporary being. It is also where assigning a value to space becomes confused.

Perspective construction is how we visually translate our world into two-dimensional representation, but an alternative definition of perspective is the ability to see relevant data in a meaningful relationship. With new developments in technology, telecommunications, and telescience we witness a changing paradigm--one which presupposes the breakdown of conditional and conventional perceptual systems. These new technologies allow the dismantling of distance with the intrinsically rooted concept of speed. Our perceptions regarding the physical movement from place to place are no longer applicable when we are connected to 'place' by telephone, or roaming within the labyrinth of information through computer technologies. The highest reach of this notion would perhaps be existing physically in one place, but experiencing every place--being nowhere. These concepts allow the breakdown of place-hood perception--for now our mind and senses do not inhabit the physical space which our body does. Basically this exploits the very notion of the conflict between perception and Nothingness. This movement, which seemingly developed from the notion of assigned worth, is a dilemma of our time.

Sartre develops an explicit explanation of Nothing. He describes that if an object is to be posited as absent or not existing, then there must be involved the ability to constitute an emptiness or Nothingness with respect to it.[9] Sartre goes further than this and says that in every act of imagination there is a double nihilation. In this connection he makes an important distinction between being-in-the-world and being-in-the-midst-of-the-world. To be in-the-midst-of-the-world is to be one with the world, as in the case of objects. But consciousness is not in-the-midst-of-the-world; it is in-the-world. This means that consciousness is inevitably involved in the world (both because we have bodies and because by definition consciousness is consciousness of a transcendent object) but that there is a separation between consciousness and the things in the world.[10] For consciousness in its primary form, as we saw earlier, is a non-positional self-consciousness; hence if consciousness is consciousness of an object, it is consciousness of not being the object. There is, in short, a power of withdrawal in consciousness such that it can nihilate (encase within a region of non-being) the objects of which it is conscious. Imagination requires two of these nihilating acts. When we imagine, we posit a world in which an object is not present in order that we may imagine a world in which our imagined object is present. I do not imagine a tree so long as I am looking at one. To accomplish this imagining act, we must first be able to posit the world as a synthetic totality. This is possible only for a consciousness capable of effecting a nihilating withdrawal from the world. Then we posit the imagined object as existing somehow apart from the world, thus denying it as being part of the existing world.

It is this notion of positing imagination, and the property of being within-the-world, which will be described as we approach a technological media epoch; for in this 'media' environment we essentially occupy space in between being in-the-world and being in-the-midst-of-the-world. As for Nothingness, this would derive its origin from negative judgments; it would be a concept establishing the transcendent unity of all these judgments, a propositional function of the type, "X is not."[11] This negative judgment establishes itself as soon as reality and human perspective, in the conventional perceptually representable sense, are abandoned for a simulated virtual one.

Precedent analysis

This investigation of precedent follows a wandering path through discourses, all adhering to the initial thesis idea: creating a placeless architecture; reduced thought--the secondariness of form to process; descriptions of space and of processes of representation in space; and practical applications derived from mathematics, science, and process.

01 Fin d'Ou T Hou S by Peter Eisenman

02 Buckminster Fuller's conceptual connections between science/math/architecture.

03 Space as defined by modern physicists.

04 x-dimensional architecture[12]

05 Heisenberg's Uncertainty Principle

06 Hyper Graphic n-dimensional manipulations.

07 A brief discussion on the concept of Nothingness.

Fin d'Ou T Hou S

What can be the model for architecture when the essence of what was effective in the classical model--the presumed rational value of structures, representations, methodologies of origins and ends, and deductive processes--has been shown to be delusory?[13]

The Fin d'Ou T Hou S by Peter Eisenman is an exploration in architecture which moves beyond the limitations presented by the classical model to the realization of architecture as an independent discourse, free from external values; that is, the intersection of the meaningful, the arbitrary and the timeless in the artificial.

Traditionally, the architectural object was assigned a value based upon strength and visibility of its connections to a set of programmatic requirements including function, structure, meaning and aesthetics. Judgment of value on the basis of extrinsic criteria was perceived and defined as rational. Non-conformity in the context marked value-less architecture.

The first premise of the Fin d'Ou T Hou S is that the world can no longer be understood in relation to any 'absolute' frame of reference devised by man. If one accepts this presupposition then the concept of extrinsic or relative value becomes meaningless and traditional rationalism merely arbitrary. Fin d'Ou T Hou S suggests the architectural object must become internalized so that its value lies in its own processes. Those programmatic requirements which had previously been seen as the causes must now become the effects of architecture. Fin d'Ou T Hou S is not rational architecture in the traditional sense. It proposes an intrinsic value system which is alternative to a context of arbitrariness; it is true to its own logic. Faced with an object that admits no discursive element external to its own processes, our customary role as subject is futile, and we are bereft of our habitual modes of understanding and appraising architecture. Eisenman suggests that Fin d'Ou T Hou S requires a new reader, willing to suspend previous modes of deciphering for an attitude of receptive investigation.

While Fin d'Ou T Hou S claims to be self-definitive, it does not claim to be self-explanatory. The process records its own history at every point in its development, but any one of the steps shown, including the last, is no more than an artificial representation of a single frame from a seamless continuity which would be self-explanatory if it could be recreated. This in turn becomes a departure point for the thesis as a representation of spatial-temporal physical development--actually regenerating and changing in real time... alive. Traditionally, the necessity of a score or a text devalued the architectural project. Here Fin d'Ou T Hou S is presented as a score of its process and an explanation of the analysis and processes discovered in the initial configuration. This presentation is consistent with the devaluation of object in favor of process. The process consists of multiple stages of development. What is important for the analysis, and justly for the thesis, is the condition of an ever-changing system.

Object States:

A constructed system which, in effect, will develop autonomously. The system's basis is the process of decomposition, or rather an approximation of decomposition. Inherent in the initial stage of intervened action are the forms of two el shapes. There is an initial dichotomy set up by the relationships of the parts--in effect preconceiving conflict on behalf of the initial phase state. One is a present solid el, and the other is one half the other's size and is a present void. The process of decomposition happens when the smaller el moves closer to the larger el. This in effect sets the stage for interaction between the two forms.

From the start, a determined structure was designed to reflect the interactions between developed pieces of the initial equilibrium system. This structure constitutes the rules of interaction. As a volume enters another volume, the secondary volume will be displaced and changed.

The initial postulations are as follows (a small rule-lookup sketch follows the list):[14]

Active Passive Result

Presence over Presence creates Absence

Absence over Absence creates Presence

Presence over Absence creates Presence

Absence over Presence creates Absence

Void over Void creates Solid

Solid over Solid creates Void

Void over Solid creates Solid

Solid over Void creates Void
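
A minimal encoding of the postulations above (my sketch, not Eisenman's notation): the rules form a simple lookup from an (active, passive) pair of states to a resulting state, so the decomposition can be replayed mechanically.

    RULES = {("presence", "presence"): "absence",
             ("absence",  "absence"):  "presence",
             ("presence", "absence"):  "presence",
             ("absence",  "presence"): "absence",
             ("void",     "void"):     "solid",
             ("solid",    "solid"):    "void",
             ("void",     "solid"):    "solid",
             ("solid",    "void"):     "void"}

    def interact(active, passive):
        return RULES[(active, passive)]

    print(interact("solid", "void"))   # -> 'void', as the table postulates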

Notation and Trace:

Notations of Presence:

Physical representations of notation appear only on the surface of the form. These frozen states represent transitional departures from which to view the process.[15]

Present Solid : Opaque

Present Void : Transparent Color

Absent Solid : Translucent Plane Grids

Absent Void : Translucent Line Grids

Notations of Trace:

As the form moves it leaves a trace of its previous position; a notation of the previous state will appear on the surface of the form.

Present Solid : Void Line Grid

Present Void : Solid Line Grid

Absent Solid : Solid Plane Grid

Absent Void : Solid Line Grid (Void Plane Grid)

Fin d'Ou T Hou S represents a point of departure, of formal and analytic structure, from which to follow. The relative aspects are to present a rationally logical structure--in the description of empirical, arbitrary, and relevant argument--with which the project develops. It is the relationship of the author to the object, a subject of fatherless development, resulting in controlled genetic development rather than careful nurturing of the object into its primacy. Thus the process divorces the architect from the object, which is seen only as coded determination. Logic and rationality play a large role in the analysis, for by reducing the interactions/objects/formal existences one can legitimately predict the possible representational outcomes but not the formal outcomes. It is the objective persistence of viewing which allows Eisenman to continue his experiment. It is a game of which the outcome is not predictable, and definitely not preconceived, for in preconception we limit the possibilities of radical development. The representation of the diagrams as frozen objects is misleading, for in essence the object is in a continual state of developmental disarray--it is here where the departure from modernism takes place: the account of the existence of multiple, and multiply correct, forms. Noting that function is but one possibility of form, Eisenman argued that all such possibilities cannot be known a priori or discovered empirically. Architecture in its essence cannot break beyond its own compositional subjectiveness unless it breaks these bonds, and the connections between author and object are removed as much as possible.

[The Moderns proposed to extract architecture from history by identifying its essential, therefore a priori, purpose. They selected one aspect of function, use, to elevate to an a priori principle of architecture. It is obvious now that the actual function of architecture is far more complex than the efficient use of building, but even within the bounds of their own postulate, the possible uses of form that they considered to be self-evident were known only to them through a tradition, a history, and a use. Therefore form cannot follow function until function has emerged as a possibility of form. Even the possibility of utility cannot be known empirically.][16]

The process of weaning oneself from subjectiveness is relatively difficult; it is a procedurally, cyclically infinite argument--the more one removes him or herself from authorship, the greater the struggle to take the initial footstep. It is possible, in effect, to work within a range of subjectivities and objectivities, sliding the balance within the fuzzy zone, never reaching either end and always finding relative traces of the inescapable opposite.

Hyperobjects

The possibility of displaying n-dimensional hyperobjects by computer.

Edwin Abbott, in his book Flatland, describes worlds restricted to two dimensions and to one dimension, and describes the social order and the attitude about space within each. The inhabitants of both worlds were completely unable to visualize a third dimension and were baffled by its weird contortions. Man finds himself in a similar predicament when he begins to describe the properties of objects within spatial dimensions higher than three. Many alternative investigations lead to principles for justifying relationships within greater than three dimensions. One case in particular points to the ability to use projective geometries to translate the positions of points or objects in higher dimensions into perceptible three-dimensional objects. This method is very similar to the perspectival methods used today in transcribing a three-dimensional world into two-dimensional images/representations of the object in three.

Projective geometries can be used on any number of dimensions, so that an n-dimensional hyperobject can be mathematically projected into an (n-1)-dimensional space. Such projection can be applied repetitively until finally reaching a dimension which is discernible by our perceptions. Motion of a hyperobject can also be charted; the most basic type of movement would be rotation of a hyperobject in n-dimensional space.[17]
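
A minimal sketch of the repeated projection just described (the rotation plane, viewing distance, and point are arbitrary choices): a 4D point is rotated in the x-w plane and then perspective-projected from four dimensions down to three; the same projection rule could be applied again to reach two.

    import math

    def rotate_xw(p, theta):
        x, y, z, w = p
        c, s = math.cos(theta), math.sin(theta)
        return (c * x - s * w, y, z, s * x + c * w)

    def project(p, d=3.0):
        *rest, last = p                 # divide out the last coordinate,
        f = d / (d - last)              # as in ordinary perspective drawing
        return tuple(f * c for c in rest)

    p4 = (1.0, 1.0, 1.0, 1.0)           # a vertex of a hypercube
    p3 = project(rotate_xw(p4, math.pi / 6))
    print(p3)                           # a perceptible 3D shadow of the 4D point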

[Picture: a projected hyperobject][18]

Space, Space-Time

Buckminster Fuller

526.00 Space

526.01 There is no universal space or static space in Universe. The word space is conceptually meaningless except in reference to intervals between high-frequency events momentarily "constellar" in specific local systems. There is no shape of Universe. There is only omnidirectional, nonconceptual "out" and the specifically directional, conceptual "in." We have time relationships but not static-space relationships.

526.02 Time and space are simply functions of velocity. You can examine the time increment or the space increment separately, but they are never independent of each other.

526.03 Space is absence of events, metaphysically. Space is the absence of energy events, physically.

526.04 The atmosphere's molecules over any place on Earth's surface are forever shifting position. The air over the Himalayas is enveloping California a week later. The stars now overhead are underfoot twelve hours later. The stars themselves are swiftly moving with respect to one another. Many of them have not been where you see them for millions of years; many burnt out long ago. The Sun's light takes eight minutes to reach us. We have relationships- but not space.

526.05 You cannot get out of Universe. You are always in Universe.[19]

From this brief text on space, it is reasonable to assume that the investigations which Buckminster Fuller undertook were very scientifically oriented. Fuller also, in his book Synergetics, grapples with the forces of math and science to, in his view, discover the cosmic relationships of form, force, and human ideology. The framework investigated most is the nature of the complexities of geometry giving rise to a structured formal architecture. Fuller describes synergy as meaning the behavior of whole systems unpredicted by the behavior of their parts taken separately. The words synergy (syn-ergy) and energy (en-ergy) are companions. Energy studies are familiar. Energy relates to differentiating out subfunctions of nature, studying objects isolated out of the whole complex of Universe--for instance, studying soil minerals without consideration of hydraulics or of plant genetics. But synergy represents the integrated behaviors instead of all the differentiated behaviors of nature's galaxy of systems and galaxy of galaxies.[20]

It is from this departure point that Fuller analyzes the beauty of a scientific and artistic model, which to him was where invention resided. It is this notion of a comprehensive background and research within the scientific discourse which gave rise to Fuller's profound and elaborate concepts.

Project 402c Computer Generated Architecture

This project was a topic studio, taken within the computer studio, and was an experiment in form, process, and program. The project was to develop a space analog station in Antarctica: a complex to house 16 scientists year-round with enough resources and storage for their scientific experiments.

Concept: Volume only exists as mandated by the technological systems' concerns.

Permanent structures may well underlie all modes of communication, but the aim of a serial technique (technique rather than thought--a technique that may imply a vision of the world, without being itself a philosophy) is the construction of new structured realities and not the discovery of eternal structural principles.

Umberto Eco[21]

What begins is our ability to comprehend that on the contrary change ought to be very controlled. In using tables in general or a series of tables, I believe one can arrive at direct form-- is what interests everyone unfortunately-- it is wherever you are, and there is no place where it isn't- highest truth that is. Eventually everything will be happening at once nothing behind the screen, unless the screen happens to be in front. All that is necessary is an empty space of time and let it act in its magnetic way eventually there will be so much in it that whistles in order to apply to all these various characteristics he necessarily reduces it to numbers he has also gone the mathematical way of making a correspondence between roles.

John Cage[22]

Theory

Experimental form in a space analog station- the form moves past predictable convention- displacing tradition, reality, and gravity.

Antarctica -- Abstract and conceptual, a place for experimentation- the explorers who reached the barren plateau were moving towards the future; discovering. Today the research in Antarctica is following a similar path but pushing the frontier to the simulation of a space analog station. This experimentation leads us to believe that the die-hard notion of exploration is still very present in human cognition. This conceptual lesson we can learn from and (re)apply when investigating the architectural possibilities for Antarctica, space and the future of the profession.

Architecture, while continually on an experimental journey, searches to enter the 21st century- this century will be defined by information, communication and networking- all governed by advancements in technology. As architects we must not only understand this paradigm shift but define, explore and create it.

The project is an experiment attempting to define an x-dimensional architecture. An architecture which can truly be derived from--and responsive to--evolving and fluctuating fields of 'Speed', culture, and information. This architecture, loaded with information, bypasses the conventional form-making methods of sketching, drawing, and self-righteous biased determinacy. This x-dimensional architecture is rooted, created, and reduced to instantaneously transforming numeric data information. The information is based upon and reflects the fluctuating loads and demands on the essential components of the occupiable building: the systems. The attempt is to analyze the systems' demands on individual programmatic elements--thus consumption and waste are reduced and efficiency is optimized. The transformable form holds promise in not only mimicking this 'information shift' but implementing and allowing this hyperreal interactive situation to occur.

We notice that in a society which is fragmented, completely changing, and constantly in flux, Marx's allegorical modernist aphorism--that all 'fast-frozen and fixed relationships' are destroyed--depicts reality. This reference holds true today, but now we can capture this transformation formally, especially with the advent of networks whose boundary- and barrier-breaking connections will, in the near future, connect everyone. These interspatially woven relationships are continually changing. By investigating the possibilities of utilizing computers in design we place ourselves at the threshold--defining the next steps toward the future of technological and human development.

Process

Instead of designing with the classical methods of design--for example composition, symmetry, aesthetics--the experiment investigates the possibility of encoding data and transforming it into form. This is done by writing a program in AutoLisp. The importance of this process is to show how data, however changed once entered, can produce unexpected and unpredicted form. The program has the capability to run multiple series of data entry to produce a wide variety of possibilities.

01 The program enters an intensive investigation of the technological systems' concerns. These concerns are evaluated from derived equations and used as form-influencing data.

This experiment examines each programmatic element and its need, supply, or demand on the following systems: the Air Conditioning System (ACS), Re-supply System (RS), Waste Management (WM), Fire Dampening System (FDDS), Electrical Power Supply (EPS), Data Management System (DMS), Communication System (CS), and the Water Supply System (WSS). Standard, invented, and intuitive equations were derived for creating a 'unit-less' database in which to configure percentages and allocations for multiple simulations of load. For example, the loads would substantially change from day to night, or similarly summer to winter.
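The 'unit-less' database can be pictured as a simple normalization. The following Scheme fragment is a sketch under that assumption; the system names come from the list above, but the association-list representation and the example figures are mine:

  ; reduce raw system loads to unit-less fractions of the total
  (define (normalize-loads loads)
    ;; loads: list of (system-name . raw-load) pairs
    (let ((total (apply + (map cdr loads))))
      (map (lambda (pair)
             (cons (car pair)
                   (/ (cdr pair) total)))   ; fraction of total load
           loads)))

  ; (normalize-loads '((EPS . 40) (WSS . 10) (ACS . 30) (DMS . 20)))
  ; => ((EPS . 2/5) (WSS . 1/10) (ACS . 3/10) (DMS . 1/5))

Running the same normalization against day and night (or summer and winter) figures yields the multiple simulations of load described above.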

02 A computer program is written (Tech15.lsp for surface models, and Tech25.lsp for solid models)[23] which (re)analyzes the data and plots it in space. This data is represented as folded 2-dimensional planes in three-dimensional space, each graphing an individual program space for a particular instant in time. This is done through circular graphing, radiating from a center point which is determined by the highest load capacities alongside the highest-importance data.
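A minimal sketch of the circular-graphing idea, in Scheme (the thesis programs are AutoLisp, so this illustrates the technique rather than reproducing Tech15.lsp); the evenly spaced spokes are an assumption--the actual program weights the center and directions by load capacity and importance:

  (define pi 3.141592653589793)

  (define (iota n)                 ; 0, 1, ..., n-1
    (let loop ((k (- n 1)) (acc '()))
      (if (< k 0) acc (loop (- k 1) (cons k acc)))))

  ; place one vertex per program space around the center, pushing
  ; each vertex out along its spoke in proportion to its load
  (define (plot-loads loads)
    (let ((n (length loads)))
      (map (lambda (load k)
             (let ((angle (* 2 pi (/ k n))))
               (list (* load (cos angle))     ; x
                     (* load (sin angle)))))  ; y
           loads
           (iota n))))

Connecting the resulting vertices for one instant in time gives one of the folded planes described above; re-plotting with new loads gives the next instant.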

03 The program is analyzed 'personally' and relationships between programmatic spaces are determined. These relationships are the psychological human concerns between spaces. These are constant and do not change in any manipulation. The relationships are broken into 3 types:

01 Primary- Solid Relationships

02 Secondary-Transparent Relationships

03 Tertiary- Frame Relationships

The relationships between the program spaces are fixed. The primary (solid) relationships connect the quarters to hygiene, kitchen to dining, and quarters to the workspace. The secondary (transparent) relationships connect the quarters to the recreation room, office to communication control, recreation to biosphere, and general storage to vehicular storage. The tertiary (frame) relationships connect the hygiene to kitchen, chemistry lab to workspace, work to laser lab, and waste storage to fuel storage. The indeterminate, chance interconnection between crossing relationships verifies the capacity for unpredictable change.

05 The form, once output, is then analyzed and examined: horizontal, vertical cross, and vertical transverse sections are made through the form. This type of analysis begins to describe the overall volume of the form, and also shows the potential for a structural wrapping around the shell.

The data, which can be and essentially is constantly changing, is based upon the continually updated loads of the building's systems. This representative flux is produced visually, as active, engaging architecture whose form is only a by-product of the state of the given information at any time.

Specialized Studies

Program Analysis:

How is non-place created?

Program:

The theory of programming in architectural terms is to develop an information processing system. It is within this system that judgments are made, hierarchies established, and design decisions made. Working from the investigations of precedents, the thesis is finally establishing various connections--and starting to focus and narrow its considerations and exploration. The thesis is now reducing the process to a dynamic modeling system for the creation of architectural 'thought', evaluated by the performance of its conceptual process.

The basis and product of the programmatic requirements inherently lies in the ability to make qualified judgments. The purpose of this programmatic evaluation is to set up the structural models for the development of the experiment. A software program is written to actually begin to develop the program. In this thesis, the resultant developments will be in the realm of transient, transformable, and unpredictable ramifications. The program will begin to set up the structure of the possibilities of the interactions.

Essentially the program will be derived from the physical interactions found in nature. It is here where architectural methodology is applied to the investigation. To explain it 'simply' and to escape the rhetoric of cyclical argument--the program sets up how particles will interact with other particles in order to create form through the tracing of the resultant movements. These fundamental ideas derive from the principles involved in physics, especially gravitational attraction.

The process will be to first generate a two-dimensional model that will show the interactions of autonomous particles. Particles represent the smallest physical representation of architectural form. The particles will have specific variables: mass, charge, density, and position. The particles have the capability to interact with each other, either being 'pushed away' or attracted. Particles also have the capacity to be connected based upon their relative position; this will be signified as a line. A line is the product of two points, architecturally the first form-giving development. These lines will move and change based upon the relative positions of the moving connected particles; they also have the potential to split based upon their inherent mass/density/distance relationships. When three or more particles connect simultaneously their resultant form will produce triangulated or orthogonal forms; it is here where planes can be established.
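A minimal sketch of such a pairwise interaction in Scheme; the particle layout and the force law (inverse-square, signed by the product of the charges so that like charges are 'pushed away' and unlike charges attract) are illustrative assumptions, not the thesis code:

  ; a particle is a list: (x y vx vy mass charge)
  (define (make-particle x y vx vy mass charge)
    (list x y vx vy mass charge))
  (define (px p)       (list-ref p 0))
  (define (py p)       (list-ref p 1))
  (define (p-mass p)   (list-ref p 4))
  (define (p-charge p) (list-ref p 5))

  ; the (fx fy) force exerted on particle a by particle b,
  ; assuming the two positions are distinct
  (define (force-on a b)
    (let* ((dx (- (px b) (px a)))
           (dy (- (py b) (py a)))
           (r2 (+ (* dx dx) (* dy dy)))
           (r  (sqrt r2))
           ;; negative magnitude = repulsion, positive = attraction
           (f  (/ (* -1 (p-charge a) (p-charge b)
                     (p-mass a) (p-mass b))
                  r2)))
      (list (* f (/ dx r)) (* f (/ dy r)))))

Summing force-on over every other particle, then updating velocity and position, gives one generation of movement whose trace can be drawn as the lines described above.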

The site, or the field of operation, where the interactions take place can either be bounded by a physical source, such as the dimension of the screen, or be unbound and allow the interactions to take place infinitely. The field varies with the dimension: in the first dimension, space will be bounded by two fixed points, the interactions can take place between the points, and any interaction at the extremities will force the particles back into the 'field of operation'; in a two-dimensional universe the field is bounded by a plane, and the interactions will be forced within it; in a three-dimensional universe the field is bounded by a volume, for instance a cube or sphere, but it could also be bounded by contextual issues such as building positions, subsequently allowing rebound interactions. In fourth-dimensional considerations the universe would be bounded by a quasi-three-dimensional object or an inherently four-d quadrant system, etc.

In a potentially boundless universe the interactions could inherently be controlled by entropy, or the systematic breakdown of the particles' energies into smaller constituent portions of unusable energy. This could also be controlled by timing devices tuned for possibilities of non-interaction, allowing a meta-command to take over and randomly override a particle's vector/position.

The potential of architectural interpretation lies in the process of programming, or structuring the potential opportunities. Obviously all interactions cannot be predicted immediately, but reducing the form-making instruments into fundamental architectural concepts is important. The model can be seen in many different positions: 01 the temporality of the form created by its interactions is the result; 02 the tracing of the fundamental, basic particles and their subsequent interactions is the result; 03 the tracing of the hybrid resultants of the interactions--such as the tracing of a line into a plane, or the tracing of a 4-particle non-planar object through space creating volume--is the result. The trace of the particles' motion can also be interpreted into architectural form; fundamentally it is completely contextual in a reductionist sense, allowing the path to be created by the interaction with the environment.

Terminology:

Particles

The particle is the basic element to which forces are applied; particles have initial positions, velocities, mass, and density.

2object

The 2object is the connection of two particles creating a line.

3object

The 3object is the connection of three particles creating a triangulated surface with the potential of being solid, transparent, or negative depending on the combination of the particles.

4+object

The 4+object is the connection of four or more particles creating a plane; this plane, if the particles are non-planar, creates a curved surface. The 4+object has the potential of embodying other 'architectural' material--form-making--characteristics.

Mass

Theoretical mass of the object--a constant which might not be applied to the system, but which helps for the modeling of curvilinear movement.

Density

In kilograms per cubic meter--a constant which might not be applied to the system, but which helps for the modeling of curvilinear movement.

Position

This will be determined by the dimensional constant; potential dimensions:

+/-0 dimension: begins to investigate the potential of operating with alternative constants and variables, without direct movement but with the exchange of electrons or particles between objects.

1-dimension: This dimension is bounded by a line, and fundamentally with two end points--movement of particles would be along the line. Position would be determined by location measured in x.

2-dimension: This dimension is bounded by a plane, where interactions are flat--allowing for the potential of a high probability of 'crashes' between particles. Position would be determined by location measured in x, y.

3-dimension: This dimension is limited by three vectorial position variables: x, y, and z.

Velocity

In meters/second:

V = v + at (the new velocity V from the current velocity v, the acceleration a, and the time interval t)

Acceleration

a = F/m (from Newton's second law, below)

Motion

X = x + vt (the new position X from the current position x and the velocity v over the interval t)

Time

Time Interval: This is the length of time that a gravitational force acts on objects, in seconds; in other words, the time elapsed between calculations for new velocity and position. The particle motions are calculated at discrete intervals: the particles, when positioned or plotted within the computer, aren't moving in curves but in many tiny straight lines. The time interval relates to the length of those lines; in real space those lines would approach zero length.
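A sketch of one such interval in Scheme, taken directly from the V = v + at and X = x + vt relations above; the state representation is an assumption of the sketch:

  ; advance one particle one tick: state is (x v), a is the
  ; acceleration over this interval, dt is the time interval
  (define (step state a dt)
    (let ((x (car state))
          (v (cadr state)))
      (list (+ x (* v dt))       ; X = x + vt
            (+ v (* a dt)))))    ; V = v + at

  ; shorter intervals give shorter straight lines and a smoother
  ; approximation of the true curve:
  ; (step '(0 10) -9.8 0.1)  => (1.0 9.02)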

Total energy

Total Energy: Particles in higher orbits have more total energy than particles in lower orbits, even though the lower orbiting ones have a higher speed. This is how spacecraft are able to "slingshot" out of a system. They steal energy from an orbiting body by throwing it into a lower orbit.

Momentum

Total Momentum: To calculate the total momentum of a system, multiply the mass of each object by its velocity, then sum the results: M1 x V1 + M2 x V2 + M3 x V3 + .... If the total momentum is not zero, the whole system will "drift" through space. The positions of the planets only come into play when calculating total angular momentum.
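As a sketch of this bookkeeping in Scheme (one axis only; a full model would run the same sum on each axis separately), assuming masses and velocities are kept in parallel lists:

  ; M1 x V1 + M2 x V2 + ... along a single axis
  (define (total-momentum masses velocities)
    (apply + (map * masses velocities)))

  ; (total-momentum '(2 3) '(5 -4))  => -2, so this system drifts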

Newton's second law

F = ma

Newton = kg m / sec^2

Gravitational Constant

Gravitational Constant: This is the strength of the gravitational field induced by a mass. You should not need to change this value, but it is available for versatility. This value affects the units used for mass, density, distance, and velocity. The value found in nature is 6.67E-11 N(m^2)/(kg^2).

G = N m^2 / kg^2
  = (kg m / sec^2) m^2 / kg^2
  = kg m^3 / (sec^2 kg^2)
  = m^3 / (sec^2 kg)

Considerations

Range of interaction

Number of objects in the environment

Combinatorial and subtractive connections

The conversion factor, therefore, between the SI gravitational constant and its equivalent in our units is:

K = AU^3 / (YEAR^2 M_SUN)
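As a numerical check on this conversion in Scheme (the constants for the AU, the year, and the solar mass are standard values, not taken from the text):

  (define G     6.67e-11)    ; m^3 / (sec^2 kg)
  (define AU    1.496e11)    ; meters
  (define YEAR  3.156e7)     ; seconds
  (define M-SUN 1.989e30)    ; kilograms

  ; G expressed in AU^3 / (YEAR^2 M_SUN)
  (define K (/ (* G M-SUN YEAR YEAR) (* AU AU AU)))
  ; K => about 39.5, i.e. roughly 4 pi^2, as Kepler's third law
  ; requires for a body orbiting the Sun once per year at one AU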

Programmatic Outline for Procedural Studio Scientific Investigation

Specialized Studies

This section will deal with certain explorations, concepts, and definitions which have motivated the thesis. They are explanations of scientific phenomena, and rooted within their computational designs are important relationships between parts, sub-parts, and wholes. There is an explanation of Heisenberg's uncertainty principle, of natural physical causal forces within our environment, and of understandings of how the brain works.

Uncertainty Principle-

The importance of the uncertainty principle is the relative unpredictability of the universe's most fundamental particles. It is used here as more of a counterpoint to the pseudo-determinant ideologies of truth. The questions raised regarding the uncertainty principle are very relevant to the seemingly visible and noticeable events which occur in the universe. Is it possible to predict future events if enough specialized information is taken into account? For some events, sure: we know the sun will rise tomorrow, and that Halley's comet will return in roughly 76 years. But what about events such as hurricanes, earthquakes, or winning the lottery?

In order to predict the future position and velocity of a particle, one has to be able to measure its present position and velocity accurately. The obvious way to do this is to shine light on the particle. Some of the waves of light will be scattered by the particle and this will indicate its position. However, one will not be able to determine the position of the particle more accurately than the distance between the wave crests of light; so one needs to use light of a shorter wavelength in order to measure the position of the particle precisely. Now, by Planck's quantum hypothesis, one cannot use an arbitrarily small amount of light; one has to use at least one quantum. This quantum will disturb the particle and change its velocity in a way which cannot be predicted.[24] Moreover, the more accurately one measures the position, the shorter the wavelength of the light that one needs and hence the higher the energy of a single quantum. So the velocity of the particle will be disturbed by a larger amount. In other words, the more accurately you try to measure the position of the particle, the less accurately you can measure the speed, and vice versa.

This in itself signals a fundamental shortfall, or end, in any account that tries to make the world completely deterministic: one certainly cannot predict future events exactly if one cannot even measure the present state of the universe precisely.

It is this very fact of unpredictability, as an investigatory postulate, which will also be threaded through and represented. The uncertainty principle closely resembles computer random processes, which in themselves are not random but specific coded signifiers for the next possible outcome. The generative random function has a "seed value" associated with it. Each time you reset the seed, the computer generates new random numbers based upon that seed. A given seed value will always generate the same sequence of random numbers. Changing the seed value advances the computer along a different random number sequence.
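A minimal sketch of such a seeded generator in Scheme, using a linear congruential rule; the constants are illustrative and not those of any particular system:

  (define seed 1)
  (define (reset-seed! s) (set! seed s))

  ; each call advances the seed and returns an integer in [0, 2^31)
  (define (next-random!)
    (set! seed (modulo (+ (* 1103515245 seed) 12345) 2147483648))
    seed)

  ; (reset-seed! 42) followed by repeated calls to (next-random!)
  ; always yields the same sequence; a different seed advances the
  ; generator along a different sequence

The sequence is fully determined, yet from inside the sequence each next outcome appears unpredictable--which is the resemblance to the uncertainty principle claimed above.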

Def. Uncertainty principle: one can never be exactly sure of both the position and the velocity of a particle; the more accurately one knows the one, the less accurately one can know the other.[25]

Forces

Force-carrying particles can be grouped into four categories according to the strength of the force and the particles with which they interact. It should be emphasized that the division into four classes is man-made, for the representational construction of partial theories.[26]

Gravity- This force is universal; that is, every particle is influenced by the force of gravity according to its mass or energy. Gravity is the weakest of the four forces. The important properties of gravity are that it can act over very long distances and that, within larger bodies such as the planets and the sun, it can add up to produce a significant amount of force.

Electromagnetic Force- This force interacts with electrically charged particles, for example electrons and quarks, but not with uncharged particles. It is about 10^42 times stronger than gravity. There are two kinds of charged particles, positive and negative. The force of attraction is between positive and negative particles.[27]

Weak Nuclear Force- This force is principally responsible for radioactivity; it acts on all matter particles of spin 1/2, but not on particles of spin 0, 1, or 2.[28]

Strong Nuclear Force- This force holds the quarks together in the proton and neutron, and holds the protons and neutrons together in the nucleus of the atom. It is also believed that this force is carried by another spin-1 particle, called the gluon, which interacts only with itself and with quarks. The strong nuclear force has a curious property called confinement: it always binds particles together into combinations that have no color. One cannot have a single quark on its own because it would have a color (red, green, or blue). Instead, a red quark has to be joined to a green and a blue quark by a string of gluons (red + green + blue = white). Such a triplet constitutes a proton or a neutron. Another possibility is a pair consisting of a quark and an antiquark (red + antired, or green + antigreen, or blue + antiblue = white).[29] Such combinations make up the particles known as mesons, which are unstable because the quark and antiquark can annihilate each other, producing electrons and other particles.

The importance of this explanation is to show not only the rigid scientific, and perhaps complicated structure of the system, but to show the relationships between systems hierarchically. It also begins to develop the contrived system of rules, for a describable operational system. Not so much unlike the department of building- allowing and determining regulations and codes.

Bibliography

Abbott, Edwin, Flatland, Penguin Books, New York, 1952.

Brisson, David W., Hypergraphics: Visualizing Complex Relationships in Art, Science and Technology, Westview Press, Boulder, 1968.

Bryson, N., Vision and Painting: The Logic of the Gaze, Macmillan, London, 1983.

Fuller, R. Buckminster, Synergetics: Explorations in the Geometry of Thinking, Macmillan Publishing Co., New York, 1975.

Gibson, James, The Senses Considered as Perceptual Systems, Houghton Mifflin, Boston, 1966.

Golubitsky, Martin; Stewart, Ian, Fearful Symmetry: Is God a Geometer?, Blackwell Publishers, Cambridge, 1992.

Hawking, Stephen W., A Brief History of Time, Bantam Books, New York, 1988.

Kappraff, Jay, Connections, McGraw-Hill, Inc., New York, 1990.

Klein, J., Greek Mathematical Thought and the Origin of Algebra, MIT Press, Cambridge, 1968.

Kipnis, Jeffrey, 'Architecture Unbound', pp. 12-23, AA, London, 1985.

Krieger, Martin, Doing Physics: How Physicists Take Hold of the World, Indiana University Press, Indianapolis, 1992.

Krieger, Martin, Marginalism and Discontinuity: Tools for the Craft of Knowledge and Decision, Russell Sage, New York, 1989.

Lerner, Trigg, The Encyclopedia of Physics, VCH Publishers, New York, 1991.

Mitchell, William, The Logic of Architecture, MIT Press, Cambridge, 1990.

Morrison, Foster, The Art of Modeling Dynamic Systems: Forecasting for Chaos, Randomness, and Determinism, John Wiley & Sons, Inc., New York, 1991.

Paulos, John Allen, Beyond Numeracy, Vintage Books, Random House, 1992.

Rotman, Brian, Signifying Nothing, Stanford University Press, Stanford, 1983.

Sartre, Jean-Paul, Being and Nothingness, Washington Square Press, New York, 1956.

Sklar, Lawrence, Space, Time, and Spacetime, University of California Press, Berkeley, 1976.

Stewart, Ian, The Problems of Mathematics, Oxford University Press, Oxford, 1992.

Virilio, Paul, The Lost Dimension, Semiotext(e), Columbia University, New York, 1991.

Woolley, Benjamin, Virtual Worlds, Blackwell Publishers, Oxford, 1992.

Intelligent Tutoring, Semi-directed generating architecture.

Jeffrey Krause

11 17 94

A description of the project:

Brief: To explore various abstract and practical tools for conceptual and schematic design. Fundamentally, the intention of the investigation is to explore how architectural elements can be encoded with information, rules, and interactions allowing them to develop autonomously. The purpose is twofold: 01 to question the notion of design and explore how the mind works intuitively--an attempt to understand the structure and processes involved in design thinking given very simple methods; 02 to investigate how basic and simple rules, when applied to objects, can produce very unexpected, but consistent, results.

01 Self generating architectural elements

This is a further goal; hopefully the subsequent algorithms will be beneficial in realizing this vision. Essentially the program dissolves the process of building into three abstract constituent elements: Structure, Envelope, and Circulation. The attempt would be to encode enough information into these elements and to monitor their reactions within an environment. Action and reaction add a temporal aspect to the process; over time these elements have the potential to grow, decay, mutate, and combine with other constituent elements to develop architectural sub-component characteristics. These components as aggregates can grow, decay, and mutate. There are two compelling functions for an investigation of this sort. A: To program these elements with meaningful relationships and to evaluate methodically the process of design. B: To potentially add user-specified variables and parameters to the project, reflecting individual design intentions, with which the program can begin to make 'informed' design decisions.

02 The problem of discerning boundaries around walls. For instance, if walls from the above-mentioned project develop, how can that state be recorded and developed into a meaningful architectural trace? The problem is also one of practical investigation: in what methods do we as designers formulate boundaries and demarcate space? Even more specifically, in what ways are these decisions inherently aesthetic ones--especially in a contextless environment?

State of the project:

I have focused most specifically on project 02 in the last few weeks, in an attempt to further understand the process of drawing boundaries (sectional or planar characteristics).

Process:

How to draw a wall, or any number of walls.

The first steps of the program draw abstract architectural elements as lines. For instance, the program prompts the user to enter the number of structural elements, envelope elements, and circulation elements to be drawn, and prompts the user for how many times each element should grow within the field. The user places the initial points of the elements anywhere within the field. The program then randomly grows the elements orthogonally; this in effect places another point and draws a line from point a to point b. The length of each development is variable and can be set by the user to various tolerances or specific distances (i.e. Structure can grow anywhere between 3-6 units, Circulation grows at a constant 5 units, Envelope grows between 2-8 units). The elements continue to grow the number of times specified initially.
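A sketch of one growth step in Scheme (the actual programs are AutoLisp; see the program descriptions later in this document); it assumes a (random n) procedure returning an integer in [0, n), which most implementations provide though it is not part of the standard:

  ; move orthogonally from point = (x y) by a random distance of
  ; at least lo and less than hi, in a randomly chosen direction
  (define (grow point lo hi)
    (let ((dist (+ lo (random (- hi lo))))
          (dir  (random 4)))
      (let ((x (car point)) (y (cadr point)))
        (case dir
          ((0)  (list (+ x dist) y))      ; east
          ((1)  (list (- x dist) y))      ; west
          ((2)  (list x (+ y dist)))      ; north
          (else (list x (- y dist)))))))  ; south

  ; drawing a line from point to (grow point 3 6) gives one
  ; generation of, say, a structural element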

Issues on constraints of growth.

As of now there are two ways to continue subsequent growth. Each future growth is directly influenced by the character of growth of an element's counterpart elements:

Element       Counterpart

Structure     Envelope / Circulation

Circulation   Structure / Envelope

Envelope      Structure / Circulation

How to draw a boundary, or a polygon around "walls" or elements

There are many ways of drawing a boundary around a finite number of elements; I will first describe the process, describe some examples, and expound on some constraints. At this point I am trying to draw orthogonal boundaries around the walls.

The program first identifies the elements to use as a set for the boundary. A weighted center is found. The walls are characterized as points (i.e. each wall has two points, a start and a current). From the center two axes are drawn (vertical and horizontal) demarcating four quadrants, starting from the upper right quadrant 01, to the upper left quadrant 02, to the lower left quadrant 03, to the lower right quadrant 04. Points are organized by quadrant, and then subsequently ordered circularly (see attached diagram).

After the points are ordered (1, 2, 3, 4...) there are four options for the program to proceed, notated as U, D, S, P: a line can be drawn up, down, or straight, or can pass to the next point. Up or down is determined by the points' relative positions to one another--this is important if adhering to the mandate of orthogonal lines. If drawing a line from a to b, and the type is (u)p or (d)own, an intermediate point is determined, and a line is drawn from a to int. and from int. to b. For instance, if the coordinate for point a is (1, 1.5, 0) and the coordinate for point b is (2, 3, 0) and the type is (u)p, then the intermediate point would be located at (1, 3, 0); if the type was (d)own, the intermediate point would have the coordinate (2, 1.5, 0). A straight algorithm is implemented to determine between which points a boundary line can be drawn straight; for instance, between point A (1, 1.5, 0) and point B (3, 1.5, 0) a straight line can be drawn without introducing an intermediate point. This is important to eliminate redundant solutions if only working within an (u)p or (d)own environment. All possible solutions for the set are found (within the constraints of the process).
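The up/down rule reduces to swapping coordinates between the two endpoints; a minimal Scheme sketch, with points flattened to (x y) lists, reproducing the worked example above:

  ; the intermediate point takes its x from one endpoint and its
  ; y from the other, so both drawn segments stay orthogonal
  (define (intermediate a b type)
    (if (eq? type 'up)
        (list (car a) (cadr b))    ; rise above a, then across to b
        (list (car b) (cadr a))))  ; across from a, then up to b

  ; (intermediate '(1 1.5) '(2 3) 'up)    => (1 3)
  ; (intermediate '(1 1.5) '(2 3) 'down)  => (2 1.5)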

There are a number of possibilities from this juncture; 'current' or 'future' indicates whether each is possible as of 11 11 94.

Option                                                        Type

01 Draw all possibilities                                     current
   (this includes regular, irregular, crossing, and dangling polygons)
02 Draw all regular polygons                                  current
03 Draw regular polygon trying to connect all points          current
04 Draw regular polygon with possibility of passing points    current
05 Draw regular polygon with maximum area                     current
06 Draw # solutions with maximum area                         future
07 Draw # solutions with minimum area                         future
08 Draw # solutions                                           future
09 Find solutions for specific area                           future
10 Draw polygon using # lines                                 future
11 ...

It could become interesting, with a finite set of lines, to toggle between the above-mentioned variables, for example:

draw 10 solutions of regular polygons with maximum areas

or

draw 5 solutions using a maximum of 6 lines and containing areas between 10-15 units.

Affordances and constraints:

The technology used is a PC 486 66MHz with 8MB RAM and a large hard drive. The programming environment is AutoLisp and Scheme (both Lisp dialects); each has advantages and disadvantages. AutoLisp offers ease of graphical output and AutoCAD functions, and has dynamic binding. Scheme has lexical binding, type implementation (put and get), a more advanced lambda function, and direct stack control. Both are fairly slow with the programs running uncompiled.

Part of the problem is that the problem, as it is currently configured, is one of 2^n order of growth. This is extraordinarily computationally intensive. The problem has been reconfigured into a 3^n-s/s, but it is still slow.

Time:

To find all solutions (boundary) for three walls takes about 15 seconds. Here is some more information:

State    # Points  # Straight  Pts Passed  # Solutions  Time

3w, 2g   9         1           3           9            0:0:40
3w, 1g   6         0           1           2            0:0:05
3w, 1g   6         0           1           1            0:0:05
3w, 1g   6         1           0           2            0:0:04
3w, 2g   9         2           1           1            0:0:16
3w, 2g   9         3           0           1            0:0:12
3w, 3g   12        2           1           2            0:6:00
3w, 3g   12        2           4           2            0:1:20
3w, 3g   12        2           5           6            0:1:06
3w, 4g   15        2           6           22           0:7:03
5w, 1g   10        2           0           64           0:1:40 *
1w, 3g   4         2           0           1            0:0:03
8w, 1g   16        15          0           1            0:6:35 *
2w, 1g   4         0           0           5            0:0:03 *
3w, 1g   6         0           0           25           0:0:13
9w, 1g   18        4           5           15           9:04:00 *

The benefits of undertaking a project of this sort are multi-fold.

01 It gives insight into the process of developing multiple solutions for a viable architectural problem.

02 It allows the user to investigate possibilities he/she would not have thought of otherwise.

03 It allows a quick graphical interface for the realization of problems.

04 It has the potential to allow for interaction between the computer and user at multiple levels.

05 It gives the user potential insights into their own design techniques through the toggling of the variables and tolerances inside the program--and also an exploration of the rational and irrational procedures inherently involved in the design process.

Programs, their structure, effects, and consequences.

(this was last updated 10 15 94--it is somewhat old)

aarch.lsp-aarch7.lsp (seven evolutions)

This program is the foundation for the development of the idea; it models n generations of structure, envelope, and circulation. The initial development is abstract: each component is represented only by a point, and its development is tracked by a line. The elements are placed in user-determined quantities within a field for development. The user places the elements anywhere (initially) and also provides x number of iterations. The elements develop randomly, in an orthogonal direction and at a root + random distance. Each element also has an offset rectangle which circumscribes the element--this is explained in smart.lsp. The pure cause for random development is explained further in the description of the program extern.lsp. Aarch7.lsp is specifically written to train the pattern matcher in program extern.lsp.

smart.lsp, smart2.lsp

These two programs set parameters allowing autonomous growth and decay to take place.

The smart.lsp program uses the element offset to determine whether the element can continue to grow. Each element has an offset; the program checks whether or not other elements are located within every element's offset. For example, a structural element whose rectangular offset contains a circulation element and an envelope element can continue to grow; if the elements do not fulfill this requirement they decay.

The smart2.lsp program takes three inputs: the existing data set of elements on the screen, a maximum distance value, and a maximum number value. The data set captures the current state of the three elements (structure, circulation, and envelope), locating their positions and proximities to other elements. The max-dist value is the maximum distance an element can be from its two corresponding counterparts, i.e. structure can only be max-dist from any element in both sets of circulation and envelope. If the element does not fulfill this requirement it decays. The max-num value relates to the max-dist value to regulate congestion. For example, if the max-num value is set to 20 and the max-dist value to 15, every element makes a list of all other elements which are closer than 15 units away; if there are more than 20, the element decays.
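A Scheme sketch of the congestion test just described; elements are reduced to (x y) points, and the counterpart-type check that smart2.lsp also performs is omitted:

  ; straight-line distance between two (x y) points
  (define (dist a b)
    (sqrt (+ (expt (- (car a) (car b)) 2)
             (expt (- (cadr a) (cadr b)) 2))))

  (define (filter keep? lst)                  ; keep matching items
    (cond ((null? lst) '())
          ((keep? (car lst))
           (cons (car lst) (filter keep? (cdr lst))))
          (else (filter keep? (cdr lst)))))

  ; the element decays when more than max-num neighbors lie
  ; within max-dist of it
  (define (decays? element others max-dist max-num)
    (let ((near (filter (lambda (e) (< (dist element e) max-dist))
                        others)))
      (> (length near) max-num)))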

struct.lsp

This is a program which begins to filter structural elements into two component sets, columns and walls. It is underdeveloped as of yet.

center.lsp

Finds the weighted center of all points.

quadrant.lsp

This is the foundation for the boundary program (extern.lsp); it makes an organized list of all of the points radially around the center.
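A Scheme sketch of the same radial ordering; the two-argument atan handles the quadrant logic implicitly (the actual quadrant.lsp orders by explicit quadrants, as described in the boundary section), and a small insertion sort keeps the fragment self-contained:

  ; weighted center: the average of all x and all y coordinates
  (define (center points)
    (let ((n (length points)))
      (list (/ (apply + (map car points)) n)
            (/ (apply + (map cadr points)) n))))

  ; angle of point p around center c; (atan y x) resolves the quadrant
  (define (angle-from c p)
    (atan (- (cadr p) (cadr c)) (- (car p) (car c))))

  (define (sort-by key lst)                   ; simple insertion sort
    (define (insert x l)
      (if (or (null? l) (< (key x) (key (car l))))
          (cons x l)
          (cons (car l) (insert x (cdr l)))))
    (if (null? lst) '()
        (insert (car lst) (sort-by key (cdr lst)))))

  (define (order-radially points)
    (let ((c (center points)))
      (sort-by (lambda (p) (angle-from c p)) points)))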

extern.lsp

This is the main program. It first takes the organized points and then develops a skeleton which describes the points and their relationships to adjacent points. It takes the starting point and compares it with the next (second) node, the third node, and the precursor node. It then compares the other three nodes with the current node to describe the distance from the center (weighted) node. It also compares the relative relationship of the current node with the other nodes. For example:

("q2" "q2" "q2" "q3" 1 1 0 1 1 0 1 1 1 0 0 0 0 "u")

01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18

Quadrants

01 The quadrant of the current node

02 The quadrant of the previous node

03 The quadrant of the next node

04 The quadrant of the third node

Distance from center (weighted) node

05 Current node > Next node = 1 if #T

06 Current node > Previous node = 1 if #T

07 Current node > Third node = 1 if #T

Positions

08 Current node (x-value) > Next node (x-value) = 1 if #T

09 Current node (y-value) > Next node (y-value) = 1 if #T

10 Current node (x-value) > Previous node (x-value) = 1 if #T

11 Current node (y-value) > Previous node (y-value) = 1 if #T

12 Current node (x-value) > Third node (x-value) = 1 if #T

13 Current node (y-value) > Third node (y-value) = 1 if #T

Similarity

14 Current node (x-value) = Next node (x-value) = 1 if #T

15 Current node (y-value) = Next node (y-value) = 1 if #T

16 Current node (x-value) = Third node (x-value) = 1 if #T

17 Current node (y-value) = Third node (y-value) = 1 if #T

Outcome

18 The actions taken "up" "down" "straight" "pass node up" "pass node down" "pass node straight"

The skeleton case (w/o the outcome) is matched with other trained (stored) skeleton cases--if there is a match the outcome is returned. The process continues the same way for each node around the circle, drawing a perimeter line as it proceeds. A report is printed at the end, and another to a file, recording the number of nodes in the example, the number of correctly predicted outcomes, and the depth of the training set (see attached sheets).
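A minimal sketch of the lookup in Scheme, assuming both the new case and the stored training cases are lists like the 18-field example above, with the outcome as the last item:

  ; skeleton: the 17 descriptive fields, without the outcome;
  ; training-set: stored cases that still carry their 18th field.
  ; returns the trained outcome on a match, or #f if none matches
  (define (match-case skeleton training-set)
    (cond ((null? training-set) #f)
          ((equal? skeleton
                   (reverse (cdr (reverse (car training-set)))))  ; all but last
           (car (reverse (car training-set))))                    ; the outcome
          (else (match-case skeleton (cdr training-set)))))

Because only relationships, never exact positions, are stored in a skeleton case, two geometrically different configurations can match the same trained outcome--which is what lets the training set generalize.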

The most compelling issue about the investigation is that, although approached logically, the program does not force a solution based upon the "most efficient boundary" or "simplest perimeter" (although it should have those as options), but does allow the user to derive the training set based upon visual and aesthetic concerns.

Variables

#Structural units

#Circulation units

#Envelope units

#Hi_var_str, circ, env

#Lo_var_str, circ, env

#Cons_var_str, circ, env

more.....

Definition of terms:

Grow: One generation of a single constituent architectural element, this element 'grows' abstractly by duplicating itself and moving in any number of directions.

Decay: The resistance to growth, can also be seen as temporarily dormant, not activated by other adjacent objects.

Single Aggregate: A unit having all three architectural elements, structure, envelope, circulation.

Regular Polygon: Either a concave or convex polygon; not regular in the geometric definition of having the properties of equilateral and equidistant.

Dangling Polygon: A non-regular polygon which draws over itself one or more times.

Crossing Polygon: A semi-regular polygon which subsequently looks like two or more polygons, in which the lines cross.

Skeleton Case: An instantiation of the current state of a set of points, whereby the exact positions are not recorded; only relationships are recorded.

Filter: Any number of algorithms to search and select through some type of object set.

Line: A wall, with two points.

Point: The smallest base object of individual architectural elements.

U,D,S,P: The convention for drawing boundaries--representing Up, Down, Straight, Pass, in reference to connecting two points.

Algorithm: Any set of instructions; a procedural set of rules for the computer.

Pattern Match: In this case it is the matching of states rather than the matching of objects.

Center: The weighted center of a given set of points, derived through the averaging of all x-coordinates, and y-coordinates.

Field: The region in which development takes place.

Offset: A 2-dimensional region derived from a one-dimensional line.