DeletedUser
Hey there, I've missed chatting with you all so I thought I'd come up with a (hopefully) good topic. I may/may not reply promptly given that I don't play six hours a day anymore, but anyways, on to the show.
Technological advancement is usually an enabler of higher utility — the Agricultural Revolution, for instance, gave people a stable food supply and thus brought about the birth of modern civilization. Higher output and lower costs would seem to motivate continued investment in technological advancement. However, there are some cases where this does not apply:
1. Things are 'good enough' - everything works, money is in the coffers, everyone is happy, and people get lazy.
2. Radical changes are discouraged - the stability brought on by case 1 compels people to maintain the status quo and prolong the stability, producing stagnation.
3. Once power is gained, it is seldom relinquished willingly - often, those in power benefit the most from stability, since the length of their time in power is proportional to it.
4. The initial costs for changes seem too high - people become lax and unwilling to work meaningfully towards new horizons.
So, we can see that at times, technological advancement is at odds with utility, economic and otherwise. To make things concrete, consider computing. Nowadays, processors are not significantly faster than their predecessors, and there is also the looming end of Moore's law. Most of the excitement over significant technological advances happens in university and government labs, while consumers jump for joy at the latest iPad or whatever gadget that shows only marginal improvements in computational power. Indeed, the abundance of apps keeps people distracted. Anyone heard of memristors, graphene, or qubits?
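To make the compounding behind Moore's law concrete, here's a minimal sketch of the doubling rule (transistor counts roughly doubling every two years). The starting count and doubling period here are illustrative assumptions, not measured data:

```python
def moores_law(initial_count, years, doubling_period=2.0):
    """Project a transistor count forward, assuming a fixed doubling period.

    This is the classic compounding formula: count * 2^(years / period).
    """
    return initial_count * 2 ** (years / doubling_period)

# Starting from a hypothetical chip with 1 billion transistors:
start = 1_000_000_000
projected = moores_law(start, years=10)  # ten years of doubling every two years
print(f"After 10 years: {projected:.2e} transistors (a {projected / start:.0f}x increase)")
```

A 32x increase in a decade is what made the pre-2005 era feel so different from today — and it's exactly the kind of curve that flattens once the cases above set in.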
Some questions to ponder: Is all/some/no technological advancement worth it at the cost of utility? How can significant change be brought about in spite of the cases posed above? What are the implications?