Saturday, January 25, 2020

Speech Enhancement And De-Noising By Wavelet Thresholding And Transform II Computer Science Essay

In this project the experimenter will seek to design and implement techniques to de-noise a noisy audio signal using the MATLAB software and its functions. A literature review will be done and summarized to give details of the contributions to this area of study, and the different techniques that have been used in audio and speech processing will be analyzed and studied. The implementation will be done using MATLAB version 7.0.

Introduction

Fourier analysis is a very powerful tool: it can extract both the frequency components and the amplitude components of a signal. It is well suited to stationary signals, that is, signals that repeat and that are composed of sine and cosine components. For non-stationary signals, signals with no repetition in the region that is sampled, the Fourier transform is not very efficient. The wavelet transform, on the other hand, allows these signals to be analyzed. The basic concept behind wavelets is that a signal can be analyzed by splitting it into different components which are then studied individually, in terms of both their frequency and their time behavior. Where Fourier analysis decomposes a signal into sine and cosine components, the wavelet algorithm analyzes the data at different scales and resolutions. In wavelet analysis, a prototype wavelet, referred to as the mother wavelet, is used as the main wavelet type for the analysis; the analysis is then performed with scaled and shifted versions of the mother wavelet, and further analysis can be done on the wavelet coefficients obtained through this process. Haar wavelets are very compact, and this is one of their defining features: outside a finite interval they vanish. But the Haar wavelets have a major limiting factor: they are not continuously differentiable.

The Fourier transform translates a signal from a time-domain function to the frequency domain, where the signal can be analyzed for its frequency content; this is possible because the analysis incorporates the sine and cosine of each frequency. When a finite set of sampled points is analyzed, the result is the discrete Fourier transform (DFT); the sample points are typical of what the original signal looks like, and the transform approximates the underlying function and its integral. The DFT is realized by the use of a matrix whose order equals the total number of sample points, so the amount of computation grows roughly as the square of the number of points, and the problem worsens as the number of samples is increased.
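As a quick numeric check of this matrix formulation (my own illustration, not from the essay; dftmtx is assumed to be available from the Signal Processing Toolbox):

% The N-point DFT as an explicit N-by-N matrix product, checked against fft.
N = 8;
x = randn(N, 1);             % any test signal
F = dftmtx(N);               % N-by-N DFT matrix, so N^2 multiply-adds
X_matrix = F * x;            % direct matrix DFT
X_fft    = fft(x);           % the fast algorithm discussed next
max(abs(X_matrix - X_fft))   % agrees to round-off error (~1e-15)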
If there is uniform spacing between the samples, the Fourier matrix can be factored into a product of a few sparse matrices, and those factors can be applied to a vector in on the order of m log m operations; the result is known as the Fast Fourier Transform (FFT). Both Fourier transforms mentioned above are linear transforms. For both the FFT and the discrete wavelet transform (DWT), the inverse transform matrix is the transpose of the forward one. The basis functions of the Fourier transforms are sines and cosines, but in the wavelet domain more complex basis functions, called wavelets, are formed from a mother wavelet. These functions are localized in space and in frequency, as can be seen in their power spectra, which proves useful in finding the frequency and power distribution. Because wavelet basis functions are localized while the Fourier basis functions (sine and cosine) are not, operations using the wavelet transform are sparse, and this sparsity is useful for noise removal.

A major advantage of using wavelets is that the analysis windows vary. To isolate portions of a signal that are not continuous, short wavelet functions are good practice, but to obtain more in-depth frequency analysis, longer basis functions are best. The practice usually adopted is to have basis functions that are short and of high frequency together with basis functions that are long and of low frequency (A. Graps, 1995-2004). A point to note is that, unlike Fourier analysis with its limited basis of sine and cosine, wavelet analysis has an unlimited set of basis functions. This is a very important feature, as it allows wavelets to identify information in a signal that can be hidden from other time-frequency methods, namely Fourier analysis. Wavelets consist of different families, and within each family of wavelets there exist different subclasses, differentiated by the number of coefficients and the levels of iteration; wavelets are mostly classified by their number of vanishing moments, which is related mathematically to the number of coefficients.

Fig. above showing examples of wavelets (N. Rao 2001)

One of the most helpful and defining features of using wavelets is that the experimenter has control over the wavelet coefficients for a given wavelet type. Families of wavelets were developed that proved to be very efficient in the representation of polynomial behavior; the simplest of these is the Haar wavelet. The coefficients can be thought of as being filters; these are then placed in a transformation matrix and applied to a raw data vector. The coefficients are ordered in two patterns: one works as a smoothing filter and the other serves to extract the detail information from the data (D. Aerts and I. Daubechies 1979).
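A minimal sketch of this smoothing/detail filter pair (my own illustration; the Wavelet Toolbox function dwt is assumed to be available):

% One level of the Haar transform: pairwise averages give the smoothed
% (approximation) output, pairwise differences give the detail output.
x = [4 6 10 12 8 6 5 5];
s = (x(1:2:end) + x(2:2:end)) / sqrt(2);  % smoothing filter output
d = (x(1:2:end) - x(2:2:end)) / sqrt(2);  % detail filter output
[cA, cD] = dwt(x, 'haar');                % toolbox equivalent of the same step
% s matches cA, and d matches cD up to the toolbox's sign convention.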
The coefficient matrix for the wavelet analysis is applied in a hierarchical algorithm. In its arrangement, the odd rows contain the coefficients that act as smoothing filters and the even rows contain the wavelet coefficients that carry the detail information. The matrix is first applied to the full-length data; the output is then smoothed and decimated by half, and the step is repeated with the matrix, with more smoothing and the coefficients halved again, several times over until only a smoothed residue remains. What this process actually does is bring out the highest resolutions from the data source while also performing data smoothing.

Wavelet applications have proved very efficient and successful in the removal of noise from data, as can be seen in the work done by David Donoho; the process of noise removal is called wavelet shrinkage and thresholding. When data is decomposed using wavelets, some of the filters act as averaging filters while the others produce details. Some of the coefficients will relate to fine details of the data set, and if a given detail is small, it can be removed from the data set without affecting any major feature of the data. The basic idea of thresholding is to set to zero all coefficients that are at, or less than, a particular threshold; the remaining coefficients are then used in an inverse wavelet transform to reconstruct the data set (S. Cai and K. Li, 2010).

Literature Review

The work done by the student Nikhil Rao (2001) was reviewed. In that work a new algorithm was developed for the compression of speech signals, based on techniques for discrete wavelet transforms; MATLAB version 6 was used to simulate and implement the code. The steps taken to achieve the compression were as follows:
- Choose wavelet function
- Select decomposition level
- Input speech signal
- Divide speech signal into frames
- Decompose each frame
- Calculate thresholds
- Truncate coefficients
- Encode zero-valued coefficients
- Quantize and bit encode
- Transmit data frame
Parts of the extract above are taken from the said work by Nikhil Rao (2001). In the experiment the Haar and Daubechies wavelets were used for the speech coding and synthesis, and the MATLAB functions dwt, wavedec, waverec, and idwt were used to compute the wavelet transforms (Nikhil Rao 2001). The wavedec function performs the signal decomposition, the waverec function reconstructs the signal from its coefficients, and the idwt function computes the inverse transform on the signal of interest; all these functions can be found in the MATLAB software. The speech file analyzed was divided into frames of 20 ms, which is 160 samples per frame, and each frame was decomposed and compressed. The file format used was .OD files; because of the length of the files, they could be decomposed without being divided into frames. Both global and by-level thresholding were used in the experiment; the main aim of global thresholding is to keep the largest coefficients, independent of the size of the decomposition tree of the wavelet transform.
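A minimal sketch of this global shrinkage idea (my own illustration; noisdopp is a demo signal shipped with the Wavelet Toolbox, and the threshold value here is arbitrary):

% Wavelet shrinkage: decompose, zero out the small coefficients, reconstruct.
load noisdopp                            % demo noisy signal (variable noisdopp)
[c, l] = wavedec(noisdopp, 5, 'db4');    % 5-level decomposition with db4
thr    = 1.0;                            % global threshold, arbitrary here
c(abs(c) <= thr) = 0;                    % discard coefficients at or below thr
x_rec  = waverec(c, l, 'db4');           % inverse transform of kept coefficients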
Using by-level thresholding, the approximation coefficients are kept at the decomposition level, and during the process two bytes are used to encode the zero values: the first byte specifies the starting point of a run of zeros and the other byte tracks the number of successive zeros.

The work done by Qiang Fu and Eric A. Wan (2003) was also reviewed; their work was the enhancement of speech based on a wavelet de-noising framework. In their approach, the noisy speech signal was first processed using a spectral subtraction method, the aim being to remove noise from the signal of study before the application of the wavelet transform. The traditional approach was then followed, in which wavelet transforms are used to decompose the speech into different levels and threshold estimation is performed on the different levels; however, in this project a modified version of the Ephraim/Malah suppression rule was used for the threshold estimates. Finally, the inverse wavelet transform was used to reconstruct the enhanced speech signal. It was shown that the pre-processing of the speech signal removed small levels of noise while minimizing the distortion of the original speech signal; a generalized spectral subtraction algorithm proposed by Bai and Wan was used to accomplish this task. The wavelet transform for this approach used wavelet packet decomposition: a six-stage tree-structure decomposition was performed using a 16-tap FIR filter derived from the Daubechies wavelet, and for a speech signal sampled at 8 kHz the decomposition achieved resulted in 18 levels. The method used to estimate the threshold levels was of a new type: the experiments took into account the noise deviation for the different levels and for each different time frame. An altered version of the Ephraim/Malah suppression rule was used to achieve soft thresholding, and the re-synthesis of the signal, the very last stage, was done using the inverse perceptual wavelet transform.

Work done by S. Manikandan (2006) focused on the reduction of noise present in a received wireless signal using special adaptive techniques. The signal of interest in the study was corrupted by white noise. A time-frequency dependent threshold approach was taken to estimate the threshold level, and in this project both the hard and soft thresholding techniques were used in the de-noising process. In hard thresholding, coefficients below a certain value are set to zero. A universal threshold was used for the added Gaussian noise, with a mean-squared-error criterion; based on the experiments that were done, it was found that this approximation is not very efficient for speech, mainly because of the poor relation between the quality and the presence of correlated noise. A new thresholding technique was therefore implemented, in which the standard deviation of the noise is first estimated for the different levels and time frames; the threshold is then calculated for each sub-band and its related time frame. Soft thresholding was also implemented, with a modified Ephraim/Malah suppression rule, as seen before in the other works done in this area.
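The difference between the two thresholding rules can be sketched in a few lines (my own illustration; wthresh is the Wavelet Toolbox equivalent):

% Hard vs. soft thresholding applied to a vector of wavelet coefficients w.
w    = -4:0.5:4;                       % sample coefficient values
T    = 1.5;                            % threshold
hard = w .* (abs(w) > T);              % hard: zero small ones, keep the rest as-is
soft = sign(w) .* max(abs(w) - T, 0);  % soft: also shrink kept ones toward zero
% wthresh(w, 'h', T) and wthresh(w, 's', T) implement the same two rules.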
Based on the results obtained there was an unnatural voice pattern, and to overcome this a new technique based on a modification of the Ephraim/Malah rule was implemented.

Procedure

The procedure undertaken involved the following steps:
- Several voice recordings were made, and the file was read using the wavread function, because the recording was stored in .wav format
- The length to be analyzed was decided; for my project the entire length of the signal was analyzed
- The uncorrupted signal power and signal-to-noise ratio (SNR) were calculated using different MATLAB functions
- Additive White Gaussian Noise (AWGN) was then added to the original recording, making the uncorrupted signal corrupted
- The average power of the signal corrupted by noise and its signal-to-noise ratio (SNR) were then calculated
- Signal analysis then followed, which included:
  - The wavedec function in MATLAB was used for the decomposition of the signal
  - The detail and approximation coefficients were extracted, and plots were made to show the different levels of decomposition
  - The different levels of coefficients were then analyzed and compared, with detailed analysis of what the decomposition produced
  - After decomposition of the different levels, the de-noising parameters were obtained with the ddencmp function in MATLAB
  - The actual de-noising process was then undertaken using the wdencmp function in MATLAB, and a plot comparison was made between the noise-corrupted signal and the de-noised signal
  - The average power and SNR of the de-noised signal were computed, and comparisons were made between it, the original, and the corrupted signal

Implementation/Discussion

The first part of the project consisted of making a recording in MATLAB. A recording was made of my own voice, and the default sample rate was used, Fs = 11025. Code was used to make the recordings in MATLAB, and different variables were altered and specified based on the code used; the m-file submitted with this project gives all the code used for the project. The recording was made for 9 seconds, and the wavplay function was used to replay the recording until a desired recording was obtained. After the recording was made, the wavwrite function was used to store the data, previously held in the variable y, into a .wav file named recording1. A plot was then made to show the waveform of the recorded speech file.

Fig 1: plot showing the original recording without any noise corruption

According to Fig 1, the maximum amplitude of the signal is +0.5 and the minimum amplitude is -0.3; from observation with the naked eye it can be seen that most of the information in the speech signal is confined between the amplitudes +0.15 and -0.15. The power of the speech signal was then calculated in MATLAB using a periodogram spectrum, which produces an estimate of the spectral density of the signal and is computed from the finite-length digital sequence using the Fast Fourier Transform (The MathWorks 1984-2010). The window parameter used was the Hamming window; a window function is a function that is zero outside some chosen interval.
The Hamming window is a typical window function and is applied by point-by-point multiplication to the input of the fast Fourier transform; this controls the adjacent spectral artifacts that would appear in the magnitude of the FFT results when the input frequencies do not correspond with a bin center. Windowing can be viewed as convolution in the frequency domain, which is the same as performing the multiplication in the time domain; the result of this multiplication is that samples outside a given frequency will affect the overall measured amplitude at that frequency.

Fig 2: plot showing periodogram spectral analysis of the original recording

From the spectral analysis it was calculated that the power of the signal is 0.0011 watt. After the signal was analyzed, noise was added to it. The noise added was additive white Gaussian noise (AWGN), a random signal that has a flat power spectral density (Wikipedia, 2010): at a given center frequency, white noise contains equal power within a fixed bandwidth, and the term white means that the frequency spectrum is continuous and uniform over the entire frequency band. In this project, additive simply means that this impairment corrupts the speech by being added to the original signal. The MATLAB code used to add the noise to the recording can be seen in the m-file. For the very first recording the power in the signal was set to 1 watt and the SNR set to 80; the code was applied to signal z, which is a copy of the original recording y. Below is the plot showing the analysis of the noise-corrupted recording.

Fig 3: plot showing the original recording corrupted by noise

From observation of the plot above it can be estimated that information in the original recording is masked by the white noise added to the signal; this has a negative effect, as the clean information is masked out by the noise. Because the amplitude of the additive noise is greater than the amplitude of the recording, it causes distortion; observation of the graph shows that the amplitude of the corrupted signal is greater than that of the original recording. The noise power of the corrupted signal was calculated by dividing the signal power by the signal-to-noise ratio; the noise power calculated from the first recording is 1.37e-005. The spectrum periodogram was then used to calculate the average power of the corrupted signal; based on the MATLAB calculations this power was found to be 0.0033 watt.

Fig 4: plot showing periodogram spectral analysis of the corrupted signal

From analysis of the plot above it can be seen that the frequency content of the corrupted signal spans a wider band: the spectral analysis of the original recording showed a value of -20 Hz, compared to 30 Hz for the corrupted signal. This increase in the corrupted signal is attributed to the added noise, which masked out the original recording, as before. It was also seen that the average power of the corrupted signal was greater than that of the original signal; the increase in power can be attributed to the additive noise added to the signal.
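A minimal sketch of this noise-addition and power-measurement step (my own illustration; awgn is a Communications Toolbox function, the SNR value of 80 follows the text and is interpreted here in dB as awgn expects, and y and Fs are the recording and sample rate from above):

% Corrupt the clean recording y with additive white Gaussian noise.
z = y;                                 % working copy of the original recording
N = awgn(z, 80, 'measured');           % add AWGN at an 80 dB SNR (measured power)
% Average power from a Hamming-windowed periodogram, as described above.
[Pxx, f]  = periodogram(N, hamming(length(N)), [], Fs);
avg_power = trapz(f, Pxx);             % integrate the PSD to get power in watts
noise_pow = mean((N - z).^2);          % direct estimate of the added noise power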
The signal-to-noise ratio (SNR) of the corrupted signal was calculated from the formula corrupted power / noise power, and the corrupted SNR was found to be 240, as compared to 472.72 for the de-noised signal. The decrease in signal-to-noise ratio can be attributed to the additive noise: the level of noise relative to the level of the clean recording is greater, and this is the basis for the decreased SNR in the corrupted signal. The increase in the SNR of the de-noised signal will be discussed further in the discussion. The SNR measures how much a signal is corrupted by noise, and the lower this ratio is, the more corrupted the signal. The calculation used for this ratio is

SNR = (average signal power) / (noise power)

where the different signal and noise powers were calculated in MATLAB as seen above. The analysis of the signal then commenced. A .wav file was created for the corrupted signal using the MATLAB command wavwrite, with Fs being the sample frequency, N the corrupted signal, and the name being noise recording; a file x1 for analysis was then read back using the MATLAB command wavread. Wavelet multilevel decomposition was then performed on the signal x1 using the MATLAB command wavedec. This function performs a multilevel, one-dimensional wavelet decomposition of the signal; the discrete wavelet transform (DWT) uses pyramid algorithms, and during the decomposition the signal is passed through a high-pass and a low-pass filter. The output of the low-pass filter is further passed through a high-pass and a low-pass filter, and this process continues (The MathWorks 1994-2010) based on the specification of the programmer. The high-pass filter is a linear time-invariant filter that passes high frequencies and attenuates frequencies below a threshold called the cut-off frequency, with the rate of attenuation specified by the designer; the low-pass filter is its opposite, passing only low-frequency signals and attenuating signals of higher frequency than the cut-off. In the decomposition procedure above, the process was performed 8 times, and at each level of decomposition the signal is downsampled by a factor of 2. The high-pass output at each stage represents the actual wavelet-transformed data; these are called the detail coefficients (The MathWorks 1994-2010).

Fig 5: levels of decomposition (The MathWorks 1994-2010)

Block C above contains the decomposition vectors and block L contains the bookkeeping vector. In the representation above, a signal X of a specific length is decomposed into coefficients; the first part of the decomposition produces two sets of coefficients, the approximation coefficients cA1 and the detail coefficients cD1. To get the approximation coefficients, the signal X is convolved with the low-pass filter, and to get the detail coefficients, X is convolved with the high-pass filter.
The second stage is similar, only this time the signal being decomposed is cA1 rather than X: cA1 is again passed through the high-pass and low-pass filters to produce the next approximation and detail coefficients, and the signal is again downsampled by a factor of two. The algorithm above (The MathWorks 1994-2010) represents the first-level decomposition done in MATLAB: the original signal x(t) is split into an approximation and a detail, so that after one stage of the filter bank x(t) = A1(t) + D1(t), and after a second stage x(t) = A2(t) + D2(t) + D1(t); further passes through the filter bank produce further stages of detail coefficients (The MathWorks 1994-2010). The coefficients cAm(k) and cDm(k), for m = 1, 2, 3, ..., can be calculated by iterating or cascading the single-stage filter bank to obtain a multiple-stage filter bank (The MathWorks 1994-2010).

Fig 6: graphical representation of multilevel decomposition (The MathWorks 1994-2010)

At each level it is observed that the signal is downsampled, the sampling factor being 2, so at d8 the signal has been downsampled by 2^8, i.e. 60,000/2^8 samples. All this is done for better frequency resolution. Lower frequencies are present at all times; I am mostly concerned with the higher frequencies, which contain the actual data. I have used the Daubechies wavelet type 4 (db4). The Daubechies wavelets are defined by computing running averages and differences via scalar products with scaling signals and wavelets (M.I. Mahmoud, M. I. M. Dessouky, S. Deyab, and F. H. Elfouly, 2007). For this type of wavelet there exists a balanced frequency response, but the phase response is non-linear. The Daubechies wavelet types use overlapping windows, so that the high-frequency coefficients reflect any changes in the high frequencies; based on these properties, the Daubechies wavelets prove to be an efficient tool in the de-noising and compression of audio signals. The Daubechies D4 transform has four wavelet coefficients and four scaling coefficients. The scaling coefficients are

h0 = (1 + sqrt(3)) / (4*sqrt(2)), h1 = (3 + sqrt(3)) / (4*sqrt(2)), h2 = (3 - sqrt(3)) / (4*sqrt(2)), h3 = (1 - sqrt(3)) / (4*sqrt(2))

At each step of the wavelet transform, the scaling function is applied to the data input: if the data set being analyzed contains N values, the scaling function is applied to calculate N/2 smoothed values, and in the ordered wavelet transform the smoothed values are stored in the lower half of the N-element input vector. The wavelet function coefficient values are

g0 = h3, g1 = -h2, g2 = h1, g3 = -h0

Each scaling function value and wavelet function value is calculated as the inner product of the coefficients with four data values; the equations are (Ian Kaplan, July 2001):

a(i) = h0*s(2i) + h1*s(2i+1) + h2*s(2i+2) + h3*s(2i+3)
c(i) = g0*s(2i) + g1*s(2i+1) + g2*s(2i+2) + g3*s(2i+3)

The steps of the wavelet transform are repeated to calculate the next wavelet function value and scaling function value; with each repetition the index increases by two, and a different wavelet and scaling value is produced.
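A minimal numeric sketch of these D4 relations (my own illustration; note that the Wavelet Toolbox names this four-coefficient wavelet 'db2', and wfilters returns its decomposition filters):

% D4 scaling coefficients from the closed form, and the wavelet (high-pass)
% coefficients derived from them: g0 = h3, g1 = -h2, g2 = h1, g3 = -h0.
s3 = sqrt(3);
h  = [1+s3, 3+s3, 3-s3, 1-s3] / (4*sqrt(2));  % scaling (low-pass) coefficients
g  = [h(4), -h(3), h(2), -h(1)];              % wavelet (high-pass) coefficients
[LoD, HiD] = wfilters('db2');                 % toolbox filters for the same wavelet
% h and g match LoD and HiD up to the toolbox's ordering and sign conventions.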
Fig 7: diagram showing the steps involved in the forward transform (The MathWorks 1994-2010)

The diagram above illustrates the steps in the forward transform. From observation of the diagram it can be seen that the data is divided into separate elements: the even-indexed elements are stored in an even array and the odd-indexed elements are stored in an odd array. In reality this is folded into a single function, even though the diagram shows it as two normalized steps. The input signal in the algorithm above (Ian Kaplan, July 2001) is then broken down into what are called wavelets. One of the most significant benefits of the wavelet transform is that its window varies: to identify signal portions that are not continuous, it is desirable to have short basis functions, but to obtain detailed frequency analysis it is better to have long basis functions. A good way to achieve this compromise is to have short high-frequency basis functions and long low-frequency ones (Swathi Nibhanupudi, 2003). Wavelet analysis has an infinite set of basis functions, which gives wavelet transforms the ability to realize cases that cannot easily be realized by other time-frequency methods, namely Fourier transforms.

MATLAB code is then used to extract the detail coefficients; the m-file shows this code. The Daubechies orthogonal wavelets D2-D20 are often used. The index number represents the number of coefficients, and each wavelet has a number of vanishing moments equal to half its number of coefficients: D2 has one moment, D4 two moments, and so on. The vanishing moments of a wavelet refer to its ability to represent the information in a signal, or its polynomial behavior: the D2 type, with only one moment, easily encodes polynomials of one coefficient, that is, constant signal components; the D4 type encodes polynomials of two coefficients, the D6 polynomials of three coefficients, and so on. The scaling and wavelet functions have to be normalized, and the normalization factor is sqrt(2). The coefficients for the wavelet are derived by reversing the order of the scaling function coefficients and then reversing the sign of every second one (D4 wavelet = {-0.1830125, -0.3169874, 1.1830128, -0.6830128}). Mathematically, this looks like

b(k) = (-1)^k * c(N - 1 - k)

where k is the coefficient index, b is a wavelet coefficient, c a scaling function coefficient, and N the wavelet index, i.e. 4 for D4 (M. Bahoura, J. Rouat, 2009).
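A minimal sketch of this coefficient extraction (my own illustration; x1 and the 8-level db4 decomposition follow the text above, and appcoef/detcoef are Wavelet Toolbox functions):

% Extract approximation and detail coefficients from the 8-level decomposition.
[c, l] = wavedec(x1, 8, 'db4');    % multilevel DWT of the signal under analysis
a8 = appcoef(c, l, 'db4', 8);      % approximation coefficients at level 8
d  = detcoef(c, l, 1:8);           % cell array of detail coefficients d{1}..d{8}
for k = 1:8
    subplot(8, 1, k); plot(d{k});  % one detail band per axis, fine to coarse
end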
Fig 7: plot showing the approximation coefficients of the level 8 decomposition
Fig 8: plot showing the detail coefficients of the level 1 decomposition
Fig 9: plot showing the approximation coefficients of the level 3 decomposition
Fig 10: plot showing the approximation coefficients of the level 5 decomposition
Fig 11: plot showing a comparison of the different levels of decomposition
Fig 12: plot showing the details of all the levels of the coefficients

The next step in the de-noising process is the actual removal of the noise once the coefficients have been realized and calculated; the MATLAB functions used for the de-noising are ddencmp and wdencmp. This process removes noise by thresholding. De-noising, the task of removing or suppressing uninformative noise from signals, is an important part of many signal and image processing applications, and wavelets are common tools in the field of signal processing. The popularity of wavelets in de-noising is largely due to the computationally efficient algorithms as well as to the sparsity of the wavelet representation of data. By sparsity I mean that the majority of the wavelet coefficients have very small magnitudes, whereas only a small subset of coefficients have large magnitudes; I may informally state that this small subset contains the interesting, informative part of the signal, whereas the rest of the coefficients describe noise and can be discarded to give a noise-free reconstruction. The best known wavelet de-noising methods are thresholding approaches. In hard thresholding, all the coefficients with magnitudes greater than the threshold are retained unmodified, because they comprise the informative part of the data, while the rest of the coefficients are considered to represent noise and are set to zero. However, it is reasonable to assume that coefficients are not purely either noise or information but mixtures of both. To cope with this, soft thresholding approaches have been proposed: coefficients smaller than the threshold are made zero, and the coefficients that are kept are also shrunk toward zero by the amount of the threshold value, in order to decrease the effect of the noise assumed to corrupt all the wavelet coefficients. In my project I have chosen to do an eight-level decomposition before applying the de-noising algorithm; the decomposition levels of the different eight levels are obtained, because the signal of in
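The de-noising step itself can be sketched as follows (my own illustration; ddencmp and wdencmp are the Wavelet Toolbox functions named above, and x1, c, l are the corrupted signal and its 8-level db4 decomposition from earlier):

% Obtain default de-noising parameters, then de-noise with a global threshold.
[thr, sorh, keepapp] = ddencmp('den', 'wv', x1);   % threshold, soft/hard flag, keep-approximation flag
x_den = wdencmp('gbl', c, l, 'db4', 8, thr, sorh, keepapp);
% Compare the corrupted and de-noised signals, as in the plots above.
subplot(2, 1, 1); plot(x1);    title('noise-corrupted signal')
subplot(2, 1, 2); plot(x_den); title('de-noised signal')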

Friday, January 17, 2020

Poetics by Aristotle Essay

Aristotle's most famous contribution to logic is the syllogism, which he discusses primarily in the Prior Analytics. A syllogism is a three-step argument containing three different terms. A simple example is "All men are mortal; Socrates is a man; therefore, Socrates is mortal." This three-step argument contains three assertions consisting of the three terms Socrates, man, and mortal. The first two assertions are called premises and the last assertion is called the conclusion; in a logically valid syllogism, such as the one just presented, the conclusion follows necessarily from the premises. That is, if you know that both of the premises are true, you know that the conclusion must also be true. Aristotle uses the following terminology to label the different parts of the syllogism: the premise whose subject features in the conclusion is called the minor premise, and the premise whose predicate features in the conclusion is called the major premise. In the example, "All men are mortal" is the major premise, and since mortal is also the predicate of the conclusion, it is called the major term. "Socrates" is called the minor term because it is the subject of both the minor premise and the conclusion, and man, which features in both premises but not in the conclusion, is called the middle term. In analyzing the syllogism, Aristotle registers the important distinction between particulars and universals. Socrates is a particular term, meaning that the word Socrates names a particular person. By contrast, man and mortal are universal terms, meaning that they name general categories or qualities that might be true of many particulars. Socrates is one of billions of particular terms that falls under the universal man. Universals can be either the subject or the predicate of a sentence, whereas particulars can only be subjects. Aristotle identifies four kinds of "categorical sentences" that can be constructed from sentences that have universals for their subjects. When universals are subjects, they must be preceded by every, some, or no. To return to the example of a syllogism, the first of the three terms was not just "men are mortal," but rather "all men are mortal." The contrary of "all men are mortal" is "some men are not mortal," because one and only one of these claims is true: they cannot both be true or both be false. Similarly, the contrary of "no men are mortal" is "some men are mortal." Aristotle identifies sentences of these four forms ("All X is Y," "Some X is not Y," "No X is Y," and "Some X is Y") as the four categorical sentences and claims that all assertions can be analyzed into categorical sentences. That means that all assertions we make can be reinterpreted as categorical sentences and so can be fit into syllogisms. If all our assertions can be read as premises or conclusions to various syllogisms, it follows that the syllogism is the framework of all reasoning. Any valid argument must take the form of a syllogism, so Aristotle's work in analyzing syllogisms provides a basis for analyzing all arguments. Aristotle analyzes all forty-eight possible kinds of syllogisms that can be constructed from categorical sentences and shows that fourteen of them are valid. In On Interpretation, Aristotle extends his analysis of the syllogism to examine modal logic, that is, sentences containing the words possibly or necessarily. He is not as successful in his analysis, but the analysis does bring to light at least one important problem.
It would seem that all past events necessarily either happened or did not happen, meaning that there are no events in the past that possibly happened and possibly did not happen. By contrast, we tend to think of many future events as possible and not necessary. But if someone had made a prediction yesterday about what would happen tomorrow, that prediction, because it is in the past, must already be necessarily true or necessarily false, meaning that what will happen tomorrow is already fixed by necessity and not just possibility. Aristotle's answer to this problem is unclear, but he seems to reject the fatalist idea that the future is already fixed, suggesting instead that statements about the future cannot be either true or false.

Organon: The Structure of Knowledge Summary

The Categories, traditionally interpreted as an introduction to Aristotle's logical work, divides all of being into ten categories. These ten categories are as follows:
- Substance, which in this context means what something is essentially (e.g., human, rock)
- Quantity (e.g., ten feet, five liters)
- Quality (e.g., blue, obvious)
- Relation (e.g., double, to the right of)
- Location (e.g., New York, home plate)
- Time (e.g., yesterday, four o'clock)
- Position (e.g., sitting, standing)
- Possession (e.g., wearing shoes, has a blue coat)
- Doing (e.g., running, smiling)
- Undergoing (e.g., being run into, being smiled at)
Of the ten, Aristotle considers substance to be primary, because we can conceive of a substance without, for example, any given qualities but we cannot conceive of a quality except as it pertains to a particular substance. One important conclusion from this division into categories is that we can make no general statements about being as a whole because there are ten very different ways in which something can have being. There is no common ground between the kind of being that a rock has and the kind of being that the color blue has. Aristotle's emphasis on the syllogism leads him to conceive of knowledge as hierarchically structured, a claim that he fleshes out in the Posterior Analytics. To have knowledge of a fact, it is not enough simply to be able to repeat the fact. We must also be able to give the reasons why that fact is true, a process that Aristotle calls demonstration. Demonstration is essentially a matter of showing that the fact in question is the conclusion to a valid syllogism. If some truths are premises that can be used to prove other truths, those first truths are logically prior to the truths that follow from them. Ultimately, there must be one or several "first principles," from which all other truths follow and which do not themselves follow from anything. However, if these first principles do not follow from anything, they cannot count as knowledge because there are no reasons or premises we can give to prove that they are true. Aristotle suggests that these first principles are a kind of intuition of the universals we recognize in experience. Aristotle believes that the objects of knowledge are also structured hierarchically and conceives of definition as largely a process of division. For example, suppose we want to define human. First, we note that humans are animals, which is the genus to which they belong. We can then take note of various differentia, which distinguish humans from other animals. For example, humans walk on two legs, unlike tigers, and they lack feathers, unlike birds.
Given any term, if we can identify its genus and then identify the differentia that distinguish it from other things within its genus, we have given a definition of that term, which amounts to giving an account of its nature, or essence. Ultimately, Aristotle identifies five kinds of relationships a predicate can have with its subject: a genus relationship ("humans are animals"); a differentia relationship ("humans have two legs"); a unique property relationship ("humans are the only animals that can cry"); a definition, which is a unique property that explains the nature or essence of the subject; and an accident relationship, such as "some humans have blue eyes," where the relationship does not hold necessarily. While true knowledge is all descended from knowledge of first principles, actual argument and debate is much less pristine. When two people argue, they need not go back to first principles to ground every claim but must simply find premises they both agree on. The trick to debate is to find premises your opponent will agree to and then show that conclusions contrary to your opponent's position follow necessarily from these premises. The Topics devotes a great deal of attention to classifying the kinds of conclusions that can be drawn from different kinds of premises, whereas the Sophistical Refutations explores various logical tricks used to deceive people into accepting a faulty line of reasoning.

Physics: Books 1-4

The Physics opens with an investigation into the principles of nature. At root, there must be a certain number of basic principles at work in nature, according to which all natural processes can be explained. All change or process involves something coming to be from out of its opposite. Something comes to be what it is by acquiring its distinctive form; for example, a baby becomes an adult, a seed becomes a mature plant, and so on. Since the baby or the seed was working toward this form all along, the form itself (the idea or pattern of the mature specimen) must have existed before the baby or seed actually matured. Thus, the form must be one of the principles of nature. Another principle of nature must be the privation or absence of this form, the opposite out of which the form came into being. Besides form and privation, there must be a third principle, matter, which remains constant throughout the process of change. If nothing remains unchanged when something undergoes a change, then there would be no "thing" that we could say underwent the change. So there are three basic principles of nature: matter, form, and privation. For example, a person's education involves the form of being educated, the privation of being ignorant, and the underlying matter of the person who makes the change from ignorance to education. This view of the principles of nature resolves many of the problems of earlier philosophers and suggests that matter is conserved: though its form may change, the underlying matter involved in changes remains constant. Change takes place according to four different kinds of cause. These causes are closer to what we might call "explanations": they explain in different ways why the change came to pass.
The four causes are (1) material cause, which explains what something is made of; (2) formal cause, which explains the form or pattern to which a thing corresponds; (3) efficient cause, which is what we ordinarily mean by "cause," the original source of the change; and (4) final cause, which is the intended purpose of the change. For example, in the making of a house, the material cause is the materials the house is made of, the formal cause is the architect's plan, the efficient cause is the process of building it, and the final cause is to provide shelter and comfort. Natural objects, such as plants and animals, differ from artificial objects in that they have an internal source of change. All the causes of change in artificial objects are found outside the objects themselves, but natural objects can cause change from within. Aristotle rejects the idea that chance constitutes a fifth cause, similar in nature to the other four. We normally talk about chance in reference to coincidences, where two separate events, which had their own causes, coincide in a way that is not explained by either set of causes. For instance, two people might both have their own reasons for being in a certain place at a certain time, but neither of these sets of reasons explains the coincidence of both people being there at the same time. Final causes apply to nature as much as to art, so everything in nature serves a useful purpose. Aristotle argues against the views both of Democritus, who thinks that necessity in nature has no useful purpose, and of Empedocles, who holds an evolutionary view according to which only those combinations of living parts that are useful have managed to survive and reproduce themselves. If Democritus were right, there would be as many useless aspects of nature as there are useful, while Empedocles' theory does not explain how random combinations of parts could come together in the first place. Books III and IV examine some fundamental concepts of nature, starting with change, and then treating infinity, place, void, and time. Aristotle defines change as "the actuality of that which exists potentially, in so far as it is potentially this actuality." That is, change rests in the potential of one thing to become another. In all cases, change comes to pass through contact between an agent and a patient, where the agent imparts its form to the patient and the change itself takes place in the patient. Either affirming or denying the existence of infinity leads to certain contradictions and paradoxes, and Aristotle finds an ingenious solution by distinguishing between potential and actual infinities. He argues that there is no such thing as an actual infinity: infinity is not a substance in its own right, and there are neither infinitely large objects nor an infinite number of objects. However, there are potential infinities in the sense that, for example, an immortal could theoretically sit down and count up to an infinitely large number, but this is impossible in practice. Time, for example, is a potential infinity because it potentially extends forever, but no one who is counting time will ever count an infinite number of minutes or days. Aristotle asserts that place has a being independent of the objects that occupy it and denies the existence of empty space, or void. Place must be independent of objects because otherwise it would make no sense to say that different objects can be in the same place at different times.
Aristotle defines place as the limits of what contains an object and determines that the place of the earth is "at the center" and the place of the heavens "at the periphery." Aristotle's arguments against the void make a number of fundamental errors. For example, he assumes that heavier objects fall faster than lighter ones. From this assumption, he argues that the speed of a falling object is directly proportional to an object's weight and inversely proportional to the density of the medium it travels through. Since the void is a medium of zero density, that would mean that an object would fall infinitely fast through a void, which is an absurdity, so Aristotle concludes that there cannot be such a thing as a void. Aristotle closely identifies time with change. We register that time has passed only by registering that something has changed. In other words, time is a measure of change just as space is a measure of distance. Just as Aristotle denies the possibility of empty space, or void, Aristotle denies the possibility of empty time, as in time that passes without anything happening.

Physics: Books 5-8 Summary

There are three kinds of change: generation, where something comes into being; destruction, where something is destroyed; and variation, where some attribute of a thing is changed while the thing itself remains constant. Of the ten categories Aristotle describes in the Categories (see the previous summary of the Organon), change can take place only in respect of quality, quantity, or location. Change itself is not a substance and so it cannot itself have any properties. Among other things, this means that changes themselves cannot change. Aristotle discusses the ways in which two changes may be the same or different and argues also that no two changes are opposites, but rather that rest is the opposite of change. Time, space, and movement are all continuous, and there are no fundamental units beyond which they cannot be divided. Aristotle reasons that movement must be continuous because the alternative, that objects make infinitesimally small jumps from one place to another without occupying the intermediate space, is absurd and counterintuitive. If an object moves from point A to point B, there must be a time at which it is moving from point A to point B. If it is simply at point A at one instant and point B at the next, it cannot properly be said to have moved from the one to the other. If movement is continuous, then time and space must also be continuous, because continuous movement would not be possible if time and space consisted of discrete, indivisible atoms. Among the connected discussions of change, rest, and continuity, Aristotle considers Zeno's four famous paradoxes. The first is the dichotomy paradox: to get to any point, we must first travel halfway, and to get to that halfway point, we must travel half of that halfway, and to get to half of that halfway, we must first travel a half of the half of that halfway, and so on infinitely, so that, for any given distance, there is always a smaller distance to be covered first, and so we can never start moving at all. Aristotle answers that time can be divided just as infinitely as space, so that it would take infinitely little time to cover the infinitely little space needed to get started. The second paradox is called the Achilles paradox: supposing Achilles is racing a tortoise and gives the tortoise a head start.
Then by the time Achilles reaches the point the tortoise started from, the tortoise will have advanced a certain distance, and by the point Achilles advances that certain distance, the tortoise will have advanced a bit farther, and so on, so that it seems Achilles will never be able to catch up with, let alone pass, the tortoise. Aristotle responds that the paradox assumes the existence of an actual infinity of points between Achilles and the tortoise. If there were an actual infinity, that is, if Achilles had to take account of all the infinite points he passed in catching up with the tortoise, it would indeed take an infinite amount of time for Achilles to pass the tortoise. However, there is only a potential infinity of points between Achilles and the tortoise, meaning that Achilles can cover the infinitely many points between him and the tortoise in a finite amount of time so long as he does not take account of each point along the way. The third and fourth paradoxes, called the arrow paradox and the stadium paradox, respectively, are more obscure, but they seem to aim at proving that time and space cannot be divided into atoms. This is a position that Aristotle already agrees with, so he takes less trouble over these paradoxes. Aristotle argues that change is eternal because there cannot be a first cause of change without assuming that that cause was itself uncaused. Living things can cause change without something external acting on them, but the source of this change is internal thoughts and desires, and these thoughts and desires are provoked by external stimuli. Arguing that time is infinite, Aristotle reasons that there cannot be a last cause, since time cannot exist without change. Next, Aristotle argues that everything that changes is changed by something external to itself. Even changes within a single animal consist of one part of the animal changing another part. Aristotle's reflections on cause and change lead him ultimately to posit the existence of a divine unmoved mover. If we were to follow a series of causes to its source, we would find a first cause that is either an unchanged changer or a self-changing changer. Animals are the best examples of self-changers, but they constantly come into being and pass away. If there is an eternal succession of causes, there needs to be a first cause that is also eternal, so it cannot be a self-changing animal. Since change is eternal, there must be a single cause of change that is itself eternal and continuous. The primary kind of change is movement and the primary kind of movement is circular, so this first cause must cause circular movement. This circular movement is the movement of the heavens, and it is caused by some first cause of infinite power that is above the material world. The circular movement of the heavens is then in turn the cause of all other change in the sublunary world.

Metaphysics: Books Alpha-Epsilon

Knowledge consists of particular truths that we learn through experience and the general truths of art and science. Wisdom consists in understanding the most general truths of all, which are the fundamental principles and causes that govern everything. Philosophy provides the deepest understanding of the world and of divinity by pursuing the sense of wonder we feel toward reality.
There are four kinds of cause, or rather kinds of explanation, for how things are: (1) the material cause, which explains what a thing is made of; (2) the formal cause, which explains the form a thing assumes; (3) the efficient cause, which explains the process by which it came into being; and (4) the final cause, which explains the end or purpose it serves. The explanations of earlier philosophers have conformed to these four causes but not as coherently and systematically as Aristotle's formulation. Aristotle acknowledges that Plato's Theory of Forms gives a strong account of the formal cause, but it fails to prove that Forms exist and to explain how objects in the physical world participate in Forms. Book Alpha the Lesser addresses some questions of method. Though we all have a natural aptitude for thinking philosophically, it is very difficult to philosophize well. The particular method of study depends on the subject being studied and the inclinations of the students. The important thing is to have a firm grasp of method before proceeding, whatever the method. The best method is that of mathematics, but this method is not suitable for subjects where the objects of study are prone to change, as in science. Most reasoning involves causal chains, where we investigate a phenomenon by studying its causes, and then the causes of those causes, and so on. This method would be unworkable if there were infinitely long causal chains, but all causal chains are finite, meaning that there must be an uncaused first cause to every chain. Book Beta consists of a series of fifteen metaphysical puzzles on the nature of first principles, substance, and other fundamental concepts. In each case, Aristotle presents a thesis and a contradicting antithesis, both of which could be taken as answers to the puzzle. Aristotle himself provides no answers to the puzzles but rather takes them as examples of extreme positions between which he will try to mediate throughout the rest of the Metaphysics. Book Gamma asserts that philosophy, especially metaphysics, is the study of being qua being. That is, while other sciences investigate limited aspects of being, metaphysics investigates being itself. The study of being qua being amounts to the search into first principles and causes. Being itself is primarily identified with the idea of substance, but also with unity, plurality, and a variety of other concepts. Philosophy is also concerned with logic and the principles of demonstration, which are supremely general, and hence concerned with being itself. The most fundamental principle is the principle of noncontradiction: nothing can both be something and not be that same something. Aristotle defends this principle by arguing that it is impossible to contradict it coherently. Connected to the principle of non-contradiction is the principle of the excluded middle, which states that there is no middle position between two contradictory positions. That is, a thing is either x or not-x, and there is no third possibility. Book Gamma concludes with an attack on several general claims of earlier philosophers: that everything is true, that everything is false, that everything is at rest, and that everything is in motion. Book Delta consists of the definitions of about forty terms, some of which feature prominently in the rest of the Metaphysics, such as principle, cause, nature, being, and substance.
The definitions specify precisely how Aristotle uses these terms and often distinguish between different uses or categories of the terms. Book Epsilon opens by distinguishing philosophy from the sciences not just on the basis of its generality but also because philosophy, unlike the sciences, takes itself as a subject of inquiry. The sciences can be divided into practical, productive, and theoretical. The theoretical sciences can be divided further into physics, mathematics, and theology, or first philosophy, which studies first principles and causes. We can look at being in four different ways: accidental being, being as truth, the category of being, and being in actuality and potentiality. Aristotle considers the first two in book Epsilon and examines the category of being, or substance, in books Zeta and Eta, and being in actuality and potentiality in book Theta. Accidental being covers the kinds of properties that are not essential to a thing described. For example, if a man is musical, his musicality is accidental since being musical does not define him as a man and he would still be a man even if he were not musical. Accidental being must have a kind of accidental causation, which we might associate with chance. That is, there is no necessary reason why a musical man is musical, but rather it just so happens by chance that he is musical. Being as truth covers judgments that a given proposition is true. These sorts of judgments involve mental acts, so being as truth is an affection of the mind and not a kind of being in the world. Because accidental being is random and being as truth is only mental, they fall outside the realm of philosophy, which deals with more fundamental kinds of being.

Metaphysics: Books Zeta-Eta Summary

Referring back to his logical work in the Categories, Aristotle opens book Zeta by asserting that substance is the primary category of being. Instead of considering what being is, we can consider what substance is. Aristotle first rejects the idea that substance is the ultimate substrate of a thing, that which remains when all its accidental properties are stripped away. For example, a dog is more fundamental than the color brown or the property of hairiness that are associated with it. However, if we strip away all the properties that a dog possesses, we wind up with a substrate with no properties of its own. Since this substrate has no properties, we can say nothing about it, so this substrate cannot be substance. Instead, Aristotle suggests that we consider substance as essence and concludes that substances are species. The essence of a thing is that which makes it that thing. For example, being rational is an essential property of being human, because a human without rationality ceases to be human, but being musical is not an essential property of being human, because a human without musical skill is still human. Individual people, or dogs, or tables contain a mixture of essential and inessential properties. Species, on the other hand (for instance, people in general, dogs in general, or tables in general), contain only essential properties. A substance can be given a definition that does not presuppose the existence of anything else. A snub, for example, is not a substance, because we would define a snub as "a concave nose," so our definition of snub presupposes the existence of noses. A proper definition of a thing will list only its essential properties, and Aristotle asserts that only substances have essential properties or definitions.
A snub nose, by contrast, has only accidental properties (properties like redness or largeness, which may hold of some snubs but not of all) and per se properties (properties like concavity, which necessarily holds of all snubs but which is not essential).

Physical objects are composites of form and matter, and Aristotle identifies substance with form. The matter of an object is the stuff that makes it up, whereas the form is the shape that stuff takes. For example, the matter in a bronze sphere is the bronze itself, and the form is the spherical shape. Aristotle argues that form is primary because form is what gives each thing its distinctive nature.

Aristotle has argued that the definitions of substances cannot presuppose the existence of anything else, which raises the question of how there can be a definition that does not presuppose the existence of anything else. Presumably, a definition divides a whole into its constituent parts (for example, a human is defined as a rational animal), which suggests that a substance must in some way presuppose the existence of its constituent parts. Aristotle distinguishes between those cases where the parts of an object or definition are prior to the whole and those cases where the whole is prior to the parts. For example, we cannot understand the parts of a circle without first understanding the concept of circle as a whole; on the other hand, we cannot understand a syllable as a whole before we understand the letters that constitute its parts. Aristotle argues that, in the case of substance, the whole is prior to the parts. He has earlier associated substance with form and suggests that we cannot make sense of matter before we can conceive of its form. To say a substance can be divided by its definition is like saying a physical object can be divided into form and matter: the conceptual distinction is possible, but form and matter constitute an indivisible whole, and neither can exist without the other. Similarly, the parts of a definition of a substance are conceptually distinct, but they can only exist when they are joined in a substance.

Having identified substance with essence, Aristotle attacks the view that substances are universals. This attack becomes effectively an attack on Plato’s Theory of Forms, and Aristotle argues forcefully that universal Forms cannot exist prior to the individual instances of them or be properly defined, and so cannot play any role in science, let alone a fundamental one. He also argues against the suggestion that substances can be genus categories, like “animal” or “plant.” Humans and horses, unlike animals, have the property of “thisness”: the words human and horse pick out a particular kind of thing, whereas nothing particular is picked out by animal. Genera are thus not specific enough to qualify as substances.

Book Eta contains a number of loosely connected points elaborating Aristotle’s views on substance. Aristotle associates an object’s matter with its potentiality and its form with its actuality. That is, matter is potentially a certain kind of substance and becomes that substance in actuality when it takes on the form of that substance. By associating substance with form and actuality, Aristotle infers a further connection between substance and differentiae: differentiae are the qualities that distinguish one species in a genus from another. Book Eta also contains reflections on the nature of names, matter, number, and definition.
Metaphysics: Books Theta-Nu Summary

Book Theta discusses potentiality and actuality, considering these concepts first in regard to process or change. When one thing, F, changes into another, G, we can say that F is G in potentiality, while G is G in actuality. F changes into G only if some other agent, H, acts on it; we say that H has active potentiality and F has passive potentiality. Potentiality can be either rational or irrational, depending on whether the change is effected by a rational agent or happens naturally, and rational potentiality is distinctive in that it can produce opposites. For example, the rational potentiality of medicine can produce either health or sickness, whereas the irrational potentiality of heating can produce only heat and not cold.

All potentialities must eventually be realized: if a potentiality never becomes an actuality, then we do not call it a potentiality but an impossibility. A potentiality is also determinate, meaning that it is the potential for a particular actuality and cannot realize some other actuality. While irrational potentialities are automatically triggered when active and passive potentialities come together, this is not the case with rational potentialities, as a rational agent can choose to withhold the realization of a potentiality even though it could be realized.

Aristotle identifies actuality with form, and hence substance, while identifying matter with potentiality. An uncarved piece of wood, for example, is a potential statue, and it becomes an actual statue when it is carved and thus acquires the form of a statue. Action is an actuality, but there are such things as incomplete actions, which are also the potentiality for further actions.

Wednesday, January 1, 2020

Post Colonial Laws On Natives Rights Folly Or Fair Play

Post Colonial Laws on Natives’ Rights: Folly or Fair Play? Every ethnic group, in addition to possessing its own individual identity, holds a sense of who it is in relation to a larger spectrum, the world. Post-colonial criticism strips away that traditional perspective and examines the dynamic between the aristocratic superpower and the subdued and dejected local inhabitants. This dynamic includes not only the effects of direct colonialism by the colonizers but also the post-occupation ramifications for the colonized (Dobie 208-209).

The relationship between the colonizers and the colonized is mainly formed from a forced, violent encounter. The colonizer and the soon-to-be colonized face off in numerous conflicts and skirmishes to decide their destiny, after which the victor, the superpower, enforces strict laws and its own culture on the defeated colonized. The colonizer’s reign usually lasts a long time, granting only partial sovereignty to the colonized, who become the subaltern and accept their position by adopting the colonizer’s culture and laws in order to survive. This dynamic can be seen in Louise Erdrich’s The Round House, where the effects of post-colonialism take their toll on the formerly colonized, causing “ideal justice” and “best-we-can-do justice” to fall short of their principles when a Native American woman is raped by a white man. Erdrich presents life on the Native American reservation through a post-colonial lens. The reservation is a civilized area