The Jacquard Loom: A Driver of the Industrial Revolution
Posted by: nsj5521sw - 08-24-2021, 03:11 AM - Forum: Welcomes and Introductions - No Replies
This month The Institute is focusing on how technology is transforming the garment industry. The Jacquard loom was the first loom that automatically created complex textile patterns. This led to the mass production of cloth with intricate designs.
Joseph Marie Charles Jacquard of France was born into a family of weavers in 1752. He received no formal schooling but tinkered with ways to improve the mechanical textile looms of the day.
At that time, two people were needed on each loom. A skilled weaver and an assistant, or draw boy, chose by hand which warps (the lengthwise threads held under tension on the loom) to pull up so the weft (the thread inserted at right angles) could be pulled through the warps to create a pattern.
At an industrial exhibition in Paris in 1801, Jacquard demonstrated something truly remarkable: a loom in which a series of cards with punched holes (one card for each row of the design) automatically created complex textile patterns. The draw boy was no longer needed. Patterns that had been painstaking to produce and prone to error could now be mass-produced quickly and flawlessly, once programmed and punched on the cards.
The government of France soon nationalized the loom (or considered it government property) and compensated Jacquard with a pension to support him while he continued to innovate. He also was paid a royalty for each machine sold. It took Jacquard several more years to perfect the device and make it commercially successful.
The social and psychological impact of a machine that could replace human labor was immense.
HOW IT WORKED
Jacquard did not invent a whole new loom but a head that attaches to the loom and allows the weaving machine to create intricate patterns. Thus, any loom that uses the attachment is called a Jacquard loom.
The state-of-the-art loom at that time was one in which the harnesses holding the threads were raised or lowered by foot pedals on a treadle, leaving the weaver free to operate the machine with his hands. The Jacquard loom, in contrast, was controlled by a chain of punch cards laced together in a sequence. Multiple rows of holes were punched on each card, with one complete card corresponding to one row of the design. Chains of cards allowed sequences of any length to be constructed, not limited by the cards’ size.
Each hole position in the card corresponded to a hook, which could either be raised or lowered depending on whether the hole was punched. The hook raised or lowered the harness that carried and guided the thread. The sequence of raised and lowered threads created the pattern. A hook could be attached to a number of threads to create a continuous, intricate design.
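To make the hole-to-hook mapping concrete, here is a minimal Python sketch of the principle described above; the eight-hook card width, the sample card chain, and the display characters are invented for illustration and are not taken from any historical Jacquard specification.

```python
# Minimal sketch of the Jacquard principle: each hole position on a card
# corresponds to a hook, and a punched hole raises the warp threads tied
# to that hook. All values below are illustrative, not historical.

def raise_warps(card_row):
    """Return which warp threads are raised for one punched card.

    card_row is a sequence of 0/1 values, one per hook:
    1 = hole punched (hook raised), 0 = no hole (hook stays down).
    """
    return [bool(hole) for hole in card_row]

def weave_row(card_row, weft_char="x", warp_char="."):
    """Render one woven row: the weft shows only where the warp was raised."""
    shed = raise_warps(card_row)
    return "".join(weft_char if up else warp_char for up in shed)

# A chain of cards, one card per row of the design; laced end to end,
# the chain can be as long as the pattern requires.
card_chain = [
    [1, 0, 0, 1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [1, 0, 0, 1, 1, 0, 0, 1],
]

for card in card_chain:
    print(weave_row(card))
```

Each card lifts only the warp threads whose positions are punched, which is exactly the row-by-row selection the draw boy had previously made by hand.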
Photo caption: Herman Hollerith’s punched-card computer, invented in the early 1880s, was inspired by the Jacquard loom. PHOTO: HULTON ARCHIVE/GETTY IMAGES
FIERCE OPPOSITION
Already in the late 18th century, workers throughout Europe were upset with the increasing mechanization of their trades. Jacquard’s loom was fiercely opposed by silk-weavers in Paris who rightly saw it would put many of them out of work. In England, where an anti-industry workers movement was already well developed, news of the Jacquard loom fostered momentum for the Luddite movement, whose textile workers protested the new technology. Although the French looms did not arrive in England until the early 1820s, news of their existence helped intensify violent protests. People smashed the machines and killed textile mill owners; the authorities violently suppressed the protests. To this day, people who resist new technology are called Luddites.
But the Jacquard loom was too good to be ignored. Ultimately, it became standard throughout the industrializing world for weaving luxury fabrics before being replaced by the dobby loom in the 1840s. In a dobby, a chain of bars with pegs, rather than foot pedals, is used to select and move the harness. Even then, parts of Jacquard’s control system could be adapted to the dobby loom.
A LONG LEGACY
Perhaps what is most interesting about the Jacquard loom was its afterlife. When computer pioneer Charles Babbage, a British mathematician, envisioned an “analytical engine” in 1837 that would essentially become the first general-purpose computer, he decided that the computer’s input would be stored on punch cards, modeled after Jacquard’s system. Although Babbage never built his engine, he and his work were well known to the mathematics community and eventually influenced the field that came to be computer science.
In the mid-1880s, the U.S. Census Bureau began to experiment with ways to automate the way it was assessing the population of the United States and processing the answers to the questions survey takers asked each household. The data from the 1880 census was overwhelming; it took eight years to compile and process. Engineer Herman Hollerith, who was on the bureau’s technical staff, felt he could improve the process. He got busy and, in 1884, filed a patent for an electromechanical device that rapidly read information encoded by punching holes on a paper tape or a set of cards. In 1889 Hollerith’s newly formed Tabulating Machine Co. was chosen to process the 1890 census. The company was decidedly successful; data from the 1890 census was compiled in only one year. The 1890 population of the United States was put at 62,947,714 people.
Apparently, Hollerith based his concept on the Jacquard loom. Historians disagree, however, as to whether he also was influenced by Babbage’s work.
The Tabulating Machine Co. eventually became IBM. (Some IEEE members undoubtedly remember using IBM punch cards into the 1970s.)
Thus, the computer industry—which became a field of cutting-edge innovation—was affected by at least two streams of influence from the Jacquard loom. It is only fitting and fair that computing is now generating innovation in the textile industry with such creations as wearables, 3-D printed clothing, and digital industrial knitting machines. Before even the telegraph, innovation in textile technology was one of the “engines” (along with steam power and iron production) that drove the Industrial Revolution.
When Joseph-Marie Jacquard, a French weaver and merchant, patented his invention in 1804, he revolutionised how patterned cloth could be woven. His Jacquard machine, which built on earlier developments by inventor Jacques de Vaucanson, made it possible for complex and detailed patterns to be manufactured by unskilled workers in a fraction of the time it took a master weaver and his assistant working manually.
The spread of Jacquard's invention caused the cost of fashionable, highly sought-after patterned cloth to plummet. It could now be mass produced, becoming affordable to a wide market of consumers, not only the wealthiest in society.
To weave fabric on a loom, a thread (called the weft) is passed over and under a set of threads (called the warp). It is this interlacing of threads at right angles to each other that forms cloth. The particular order in which the weft passes over and under the warp threads determines the pattern that is woven into the fabric.
Before the Jacquard system, a weaver's assistant (known as a draw boy) had to sit atop a loom and manually raise and lower its warp threads to create patterned cloth. This was a slow and laborious process.
The key to the success of Jacquard's invention was its use of interchangeable cards, upon which small holes were punched, which held instructions for weaving a pattern. This innovation effectively took over the time-consuming job of the draw boy.
When fed into the Jacquard mechanism (fitted to the top of the loom), the cards controlled which warp threads should be raised to allow the weft thread to pass under them. With these punch cards, Jacquard looms could quickly reproduce any pattern a designer could think up, and replicate it again and again.
First, a designer paints their pattern onto squared paper. A card maker then translates the pattern row by row onto punch cards. For each square on the paper that has not been painted in, the card maker punches a hole in the card. For each painted square, no hole is punched.
The cards, each with their own combination of punched holes corresponding to the part of the pattern they represent, are then laced together, ready to be fed one by one through the Jacquard mechanism fitted at the top of the loom. When a card is pushed towards a matrix of pins in the Jacquard mechanism, the pins pass through the punched holes, and hooks are activated to raise their warp threads. Where there are no holes the pins press against the card, stopping the corresponding hooks from raising their threads.
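The card maker's translation step lends itself to a short sketch as well. The following Python fragment is a rough illustration under stated assumptions: the painted design, the card width, and the characters used are all made up, and real Jacquard cards encoded considerably more information per row.

```python
# Hypothetical sketch of the card maker's job: the design is painted on
# squared paper, and for every square that is NOT painted a hole is
# punched; painted squares get no hole. The grid below is invented.

design = [
    "##..##..",
    ".##..##.",
    "..##..##",
]

def punch_cards(painted_grid):
    """One card per design row: True = hole punched (unpainted square)."""
    return [[cell != "#" for cell in row] for row in painted_grid]

def raised_warps(card):
    """Pins pass through the holes, so those hooks raise their warp threads."""
    return [i for i, hole in enumerate(card) if hole]

for card in punch_cards(design):
    print("card:", "".join("o" if hole else "-" for hole in card),
          "| raised warp threads:", raised_warps(card))
```

Reading the cards back row by row reproduces the designer's grid as raised and lowered warp threads, the same punched-hole-or-no-hole logic discussed below.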
A shuttle then travels across the loom, carrying the weft thread under the warp threads that have been raised and over those that have not. This repeating process causes the loom to produce the patterned cloth that the punch cards have instructed it to create.
Manchester engineering companies also began manufacturing Jacquard machinery to supply to the region's textile mills. Devoge and Co. was established in 1834 and continued producing Jacquard mechanisms until the 1980s.
Jacquard's invention transformed patterned cloth production, but it also represented a revolution in human-machine interaction in its use of binary code—either punched hole or no punched hole—to instruct a machine (the loom) to carry out an automated process (weaving).
The Jacquard loom is often considered a predecessor to modern computing because its interchangeable punch cards inspired the design of early computers.
With his Analytical Engine, Babbage envisaged a machine that could receive instructions from punch cards to carry out mathematical calculations. His idea was that the punch cards would feed numbers, and instructions about what to do with those numbers, into the machine.
Ada Lovelace took Babbage's idea a step further, proposing that the numbers the engine manipulated could represent not just quantities, but any data. She saw the potential for computers to be used beyond mathematical calculation and proposed the idea of what we now know as computer programming.
Unfortunately, the Analytical Engine was never completed, and it was 100 years before Babbage's and Lovelace's predictions were realised.
However, their work, and the inspiration provided by Jacquard's revolutionary weaving machine, came to underpin the technological development of the modern computer.
How Do Toner Cartridges Work?
Posted by: nsj5521sw - 08-24-2021, 03:08 AM - Forum: Welcomes and Introductions - No Replies
What do printers do? Well, they make paper copies of what's on your screen. But contrary to what you may think, modern LaserJet toner cartridges don't print using ink. So then how do LaserJet toner cartridges work?
Here's everything you need to know about LaserJet printers, toner cartridges, and which ones are the best to buy.
One of the interesting aspects of laser printers and copiers is the toner.
Rather than the printer applying ink, the paper actually “grabs” the toner.
The toner itself is not ink, but rather an electrically-charged powder made of plastic and pigment.
A LaserJet printer consists of several components. Let's start with the photoreceptor drum assembly, a revolving cylinder made of photoconductive material.
The printer beams a laser across the surface of this revolving drum. The drum has a positive charge, but the laser discharges the points it comes in contact with, leaving the resulting image with a negative charge (or vice versa). In this way, the laser draws the document or image you wish to print.
The printer then coats the drum not with ink, but with powder. This powder sticks to the electrostatic image the laser has drawn. The powder consists of two ingredients: pigment and plastic. Pigment provides the color, while the plastic is there to adhere the pigment to paper. This mixture, known as toner, is spun in a component called the hopper.
The printer then feeds paper under the drum, first giving the paper a stronger negative charge than that of the electrostatic image. This enables the paper to pull the powder away from the drum.
The paper then passes through a pair of heated rollers referred to as the fuser. As it does, the plastic particles melt and blend with the paper. This process allows the powder to adhere to more types of paper than conventional ink, as long as they can handle the fuser's heat.
This is also why paper is hot when it first comes out of a laser printer.
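The charge-and-transfer sequence above can be illustrated with a toy Python model. It is only a sketch of the idea: the tiny made-up image, the two drum states, and the function names are assumptions, and no attempt is made to model real charge levels, optics, or fusing.

```python
# Toy model of the laser printing steps described above: a charged drum,
# a laser that writes an electrostatic image, toner that clings to the
# written points, and transfer of that toner to the paper.

CHARGED, WRITTEN = ".", "*"   # two drum surface states used in this sketch

def expose_drum(width, height, image_points):
    """The laser 'discharges' (writes) selected points on a charged drum."""
    drum = [[CHARGED] * width for _ in range(height)]
    for x, y in image_points:
        drum[y][x] = WRITTEN
    return drum

def apply_toner(drum):
    """Toner powder sticks only to the points the laser has written."""
    return [[cell == WRITTEN for cell in row] for row in drum]

def transfer_to_paper(toner_layer):
    """The more strongly charged paper pulls the toner off the drum."""
    return ["".join("#" if dot else " " for dot in row) for row in toner_layer]

# Write a hypothetical 5x3 letter "T" and print the resulting page.
points = [(x, 0) for x in range(5)] + [(2, y) for y in range(1, 3)]
page = transfer_to_paper(apply_toner(expose_drum(5, 3, points)))
print("\n".join(page))   # the fuser would then melt the toner onto the page
```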
Toner cartridges may largely do the same task, but they're not all the same. When planned obsolescence kicks in and the time comes to invest in a new one, you want to make sure you're buying a quality product.
To save money and walk away with the kind of experience you want, here are some questions to keep in mind while shopping:
Does the cartridge work in your printer? If you're buying a new cartridge, this is as simple as matching brand and model numbers. But if you're looking at third-party options, you may have to do more research. Even if a cartridge theoretically works with your printer, differences in toner powder or other components can result in damage. Triple-check reviews and whatever other information you can get your hands on.
How much does it cost to print a page? Toner cartridges can be expensive, sometimes more expensive than the cost of the printer itself. When comparing price, look at the cost per page, rather than the total cost of the cartridge. This gives you a more accurate read on whether one cartridge is truly more affordable than another (see the cost-per-page sketch after these questions).
How many pages can you print? Toner cartridges may be expensive, but you're getting a lot of pages for your buck. The average toner cartridge lasts over 1,500 pages. Some print more, and some print less. How many pages is an acceptable number to you?
Can you recycle this cartridge? Some LaserJet toner cartridge manufacturers provide their own recycling programs. Various department stores also perform this service. See which options are available in your area, and which brands are supported.
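Picking up the cost-per-page question from the list above, a few lines of Python show how the comparison works; the cartridge names, prices, and page yields are invented purely for illustration.

```python
# Simple cost-per-page comparison. All figures below are hypothetical.

cartridges = {
    "OEM cartridge":        {"price": 89.00,  "page_yield": 2600},
    "Compatible cartridge": {"price": 42.00,  "page_yield": 2200},
    "High-yield cartridge": {"price": 129.00, "page_yield": 6000},
}

for name, c in cartridges.items():
    cost_per_page = c["price"] / c["page_yield"]
    print(f"{name}: {cost_per_page * 100:.2f} cents per page")

# A pricier cartridge can still be the cheaper choice per page,
# which is why cost per page is the number worth comparing.
```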
Manufacturers test and design new cartridges specifically for your machine. Refilling a cartridge adds variability to the process. Is it guaranteed to break your printer? Not at all. But you are exposing yourself to that risk. Though if you're used to buying used products, you may already be comfortable with such a gamble.
Unfortunately, you may not even have the option. Like inkjet printers, some LaserJet toner cartridges now contain chips that communicate when a cartridge is empty. You can refill the product, but without the ability to reset the chip, the printer will still think there's nothing there.
You may also notice a difference in print quality. A refilled cartridge might not give you the kind of crisp prints you expect. You may also find that you're not getting as many prints as you were before.
How does toner work?
The two ingredients of toner, plastic and pigment, each have a simple role in the printing process.
The pigment provides the color, while the plastic allows the pigment to stick to the paper when the plastic is heated and melts.
The melting process gives laser toner an advantage over ink, in that it binds firmly to the paper fibers, resisting smudges and bleeding.
This also provides an even, vivid tone that helps text appear sharp on paper.
Another advantage of toner is the cost. Offices usually choose laser printers because the cost of replacing the toner cartridges is less than inkjet printer cartridges, and laser printers tend to cost only slightly more than inkjet printers.
Anatomy of a toner cartridge
The design of a toner cartridge varies with different models and manufacturers, but the following components are commonly found in most toner cartridges.
Toner hopper: The small container which houses the toner
Seal: A removable strip that prevents toner from spilling before installation
Doctor blade: Helps control the precise amount of toner that is distributed to the developer
Developer: Transfers toner to the OPC drum
Waste bin: Collects residual toner wiped from the OPC drum
Wiper blade: Wipes residual toner from the OPC drum after the image has been applied to the page
Primary charge roller (PCR): Applies a uniform negative charge to the OPC drum prior to laser-writing. It also erases the laser image
Organic photo-conductor (OPC) drum: Holds an electrostatic image and transfers toner onto the paper
Drum shutter: Protects the drum from light when outside the machine and retracts the drum into the printer
How does the cartridge work?
In most cartridges, the toner hopper, developer and drum assembly are all part of the replaceable cartridge unit.
When an image or text is being printed on paper, the printer gathers toner from the hopper with the developer.
The developer, composed of negatively-charged magnetic beads attached to a metal roller, moves through the hopper gathering toner.
The developer collects positively-charged toner particles and brushes them past the drum assembly.
The electrostatic image on the drum has a stronger negative charge than the beads on the developer, so the toner is pulled from the developer onto the drum.
Next, the drum moves over the paper. The paper has an even stronger negative charge than the drum, and pulls the toner particles off of the drum in the shape of the electrostatic image.
Next, the paper is discharged by the detac corona wire.
At this point, gravity is the only thing keeping the toner in place. In order to affix the toner, the paper needs to pass through the fuser rollers, which are heated by internal quartz tube lamps.
The heat melts the plastic in the toner particles, causing the toner to be absorbed into the paper fibers.
Although the melted plastic sticks to the paper, it does not adhere to the heated fuser rollers.
This is possible because the rollers are coated with Teflon, the same material that helps food slide out of non-stick frying pans.
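The transfer steps above amount to a ladder of increasingly strong negative charges that the positively charged toner climbs: developer beads, then the drum image, then the paper. The Python sketch below only illustrates that ordering; the numeric charge values are arbitrary placeholders, not measurements.

```python
# Sketch of the charge 'ladder' in the transfer steps: positively charged
# toner moves to whichever adjacent surface carries the stronger negative
# charge. Charge values are arbitrary illustration numbers.

surfaces_in_order = [
    ("developer beads", -1.0),   # toner starts here
    ("drum image",      -2.0),   # stronger negative pull than the beads
    ("paper",           -3.0),   # stronger still, so toner leaves the drum
]

location = surfaces_in_order[0][0]
for (here, q_here), (there, q_there) in zip(surfaces_in_order,
                                            surfaces_in_order[1:]):
    if q_there < q_here:          # more negative = stronger pull on (+) toner
        location = there
        print(f"toner moves from {here} to {there}")

print("final location before fusing:", location)
```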
Color vs. Monochrome Printing
Color toner works essentially the same way as monochrome toner, except the process is repeated for each of the toner colors.
The standard toner colors are cyan, magenta, yellow, and black. Black is needed because the three colored toners (cyan, magenta, and yellow) can be combined to approximate almost any color except a true black.
The reason for this is that black is not technically a color, but the complete absence of color.
These four toner colors, when combined at varying levels of saturation and lightness, can produce millions of different shades and hues.
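As a rough illustration of how a black (K) component falls out of the three colored toners, here is a short Python sketch using the standard textbook RGB-to-CMYK conversion; the sample colors are arbitrary, and real printers use far more sophisticated color management.

```python
# Illustrative RGB -> CMYK separation: the black (K) channel carries the
# darkness that would otherwise have to come from stacking C, M and Y.

def rgb_to_cmyk(r, g, b):
    """r, g, b in 0..255; returns c, m, y, k each in 0..1."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1 - max(r, g, b)
    if k == 1.0:                      # pure black: no colored toner needed
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

for name, rgb in [("orange", (255, 128, 0)), ("dark gray", (64, 64, 64))]:
    c, m, y, k = rgb_to_cmyk(*rgb)
    print(f"{name}: C={c:.2f} M={m:.2f} Y={y:.2f} K={k:.2f}")
```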
This quick guided tour of toner cartridges should help provide a basic understanding of how they work.
The current technology of toner cartridges has allowed laser printers to dominate the office printing market.
In the years to come, new designs of toner cartridges promise to provide more efficient and cost-effective solutions for office and home printing.
Eco-Friendly Toner Cartridges are a range of remanufactured compatible toner cartridges that are selectively tested and produced locally. The majority of the used empties are collected locally to reduce local landfill. Singapore is a small city where every inch counts, so every single cartridge we recycle helps.
"Check out our range of eco-friendly cartridges."
More than just a business that wants to profit and make money, Green Cartridges Pte Ltd takes its corporate social responsibility seriously. We choose to actively take part in fostering environmental awareness by making printer owners realize the need to reuse and recycle cartridges to minimize waste pollution.
How Music and Instruments Began
Posted by: nsj5521sw - 08-24-2021, 03:05 AM - Forum: Welcomes and Introductions - No Replies
Music must first be defined and distinguished from speech, and from animal and bird cries. We discuss the stages of hominid anatomy that permit music to be perceived and created, with the likelihood that both Homo neanderthalensis and Homo sapiens were capable. The earlier hominid ability to emit sounds of variable pitch with some meaning shows that music at its simplest level must have predated speech. The possibilities of anthropoid motor impulse suggest that rhythm may have preceded melody, though full control of rhythm may well not have come any earlier than the perception of music above. There are four evident purposes for music: dance, ritual, entertainment (personal and communal), and above all social cohesion, again on both personal and communal levels. We then proceed to how musical instruments began, with a brief survey of the surviving examples from the Mousterian period onward, including the possible Neanderthal evidence and the extent to which they showed “artistic” potential in other fields. We warn that our performance on replicas of surviving instruments may bear little or no resemblance to that of the original players. We continue with how later instruments, strings, and skin-drums began and developed into instruments we know in worldwide cultures today. The sound of music is then discussed, scales and intervals, and the lack of any consistency of consonant tonality around the world. This is followed by iconographic evidence of the instruments of later antiquity into the European Middle Ages, and finally, the history of public performance, again from the possibilities of early humanity into more modern times. This paper presents the ethnomusicological perspective on the entire development of music, instruments, and performance, from the times of H. neanderthalensis and H. sapiens into those of modern musical history, and it is written with the deliberate intention of informing readers who are without special education in music, and providing necessary information for inquiries into the origin of music by cognitive scientists.
But even those elementary questions are a step too far, because first we have to ask “What is music?” and this is a question that is almost impossible to answer. Your idea of music may be very different from mine, and our next-door neighbor’s will almost certainly be different again. Each of us can only answer for ourselves.
Mine is that it is “Sound that conveys emotion.”
We can probably most of us agree that it is sound; yes, silence is a part of that sound, but can there be any music without sound of some sort? For me, that sound has to do something—it cannot just be random noises meaning nothing. There must be some purpose to it, so I use the phrase “that conveys emotion.” What that emotion may be is largely irrelevant to the definition; there is an infinite range of possibilities. An obvious one is pleasure. But equally another could be fear or revulsion.
How do we distinguish that sound from speech, for speech can also convey emotion? It would seem that musical sound must have some sort of controlled variation of pitch, controlled because speech can also vary in pitch, especially when under overt emotion. So music should also have some element of rhythm, at least of pattern. But so has the recital of a sonnet, and this is why I said above that the question of “What is music?” is impossible to answer. Perhaps the answer is that each of us in our own way can say “Yes, this is music,” and “No, that is speech.”
Must the sound be organized? I have thought that it must be, and yet an unorganized series of sounds can create a sense of fear or of warning. Here, again, I must insert a personal explanation: I am what is called an ethno-organologist; my work is the study of musical instruments (organology) and worldwide (hence the ethno-, as in ethnomusicology, the study of music worldwide). So to take just one example of an instrument, the ratchet or rattle: a blade, usually of wood, striking against the teeth of a cogwheel as the blade rotates round the handle that holds the cogwheel. This instrument is used by crowds at sporting matches of all sorts; it is used by farmers to scare the birds from the crops; it was and still is used by the Roman Catholic church in Holy Week when the bells “go to Rome to be blessed” (they do not of course actually go but they are silenced for that week); it was scored by Beethoven to represent musketry in his so-called Battle Symphony, a work more formally called Wellingtons Sieg oder die Schlacht bei Vittoria, Op. 91, that was written originally for Maelzel’s giant musical box, the Panharmonicon. Beethoven also scored it out for live performance by orchestras and it is now often heard in our concert halls “with cannon and mortar effects” to attract people to popular concerts. And it was also, during the Second World War, used in Britain by Air-Raid Precaution wardens to warn of a gas attack, thus producing an emotion of fear. If it was scored by Beethoven, it must be regarded as a musical instrument, and there are many other noise-makers that, like it, must be regarded as musical instruments.
And so, to return to our definition of music, organization may be regarded as desirable for musical sound, but that it cannot be deemed essential, and thus my definition remains “Sound that conveys emotion.”
But then another question arises: is music only ours? We can, I think, now agree that two elements of music are melody, i.e., variation of pitch, plus rhythmic impulse. But almost all animals can produce sounds that vary in pitch, and every animal has a heart beat. Can we regard bird song as music? It certainly conveys musical pleasure for us, it is copied musically (Beethoven again, in his Pastoral Symphony, no.6, op. 68, and in many works by other composers), and it conveys distinct signals for that bird and for other birds and, as a warning, for other animals also. Animal cries also convey signals, and both birds and animals have been observed moving apparently rhythmically. But here, we, as musicologists and ethnomusicologists alike, are generally agreed to ignore bird song, animal cries, and rhythmic movement as music even if, later, we may regard it as important when we are discussing origins below. We ignore these sounds, partly because they seem only to be signals, for example alarms etc, or “this is my territory,” and partly, although they are frequently parts of a mating display, this does not seem to impinge on society as a whole, a feature that, as we shall see, can be of prime importance in human music. Perhaps, too, we should admit to a prejudice: that we are human and animals are not…
So now, we can turn to the questions of vocalization versus motor impulse: which came first, singing or percussive rhythms? At least we can have no doubt whatsoever that for melody, singing must long have preceded instrumental performance, but did physical movement have the accompaniment of hand- or body-clapping and perhaps its amplification with clappers of sticks or stones, and which of them came first?
Here, we turn first to the study of the potentials of the human body. There is a large literature on this, but it has recently been summarized by Iain Morley in his The Prehistory of Music (Morley, 2013). So far as vocalization is concerned, at what point in our evolution was the vocal tract able to control the production of a range of musical pitch? For although my initial definition of music did not include the question of pitch, nor of rhythm, once we begin to discuss and amplify our ideas of music, one or other of these, does seem to be an essential—a single sound with no variation of pitch nor with any variation in time can hardly be described as musical.
All animals have the ability to produce sounds, and most of these sounds have meanings, at least to their ears. Surely, this is true also of the earliest hominins. If a mother emits sounds to soothe a baby, and if such sound inflects somewhat in pitch, however vaguely, is this song? An ethnomusicologist, one of those who study the music of exotic peoples, would probably say “yes,” while trying to analyze and record the pitches concerned. A biologist would also regard mother–infant vocalizations as prototypical of music (Fitch, 2006). There are peoples (or have been before the ever-contaminating influence of the electronic profusion of musical reproduction) whose music has consisted only of two or three pitches, and those pitches not always consistent, and these have always been accepted as music by ethnomusicologists. So we have to admit that vocal music of some sort may have existed from the earliest traces of humanity, long before the proper anatomical and physiological developments enabled the use of both speech and what we might call “music proper,” with control and appreciation of pitch.
In this context, it is clear also that “music” in this earliest form must surely have preceded speech. The ability to produce something melodic, a murmuration of sound, something between humming and crooning to a baby, must have long preceded the ability to form the consonants and vowels that are the essential constituents of speech. A meaning, yes: “Mama looks after you, darling,” “Oy, look out!” and other non-verbal signals convey meaning, but they are not speech.
The possibilities of motor impulse are also complex. Here, again, we need to look at the animal kingdom. Both animals and birds have been observed making movements that, if they were humans, would certainly be described as dance, especially for courtship, but also, with the higher apes in groups. Accompaniment for the latter can include foot-slapping, making more sound than is necessary just for locomotion, and also body-slapping (Williams, 1967). Can we regard such sounds as music? If they were humans, yes without doubt. So how far back in the evolutionary tree can we suggest that motor impulse and its sonorous accompaniment might go? I have already postulated in my Origins and Development of xylophone musical instrument (Montagu, 2007, p. 1) that this could go back as far as the earliest flint tools, that striking two stones together as a rhythmic accompaniment to movement might have produced the first flakes that were used as tools, or alternatively that interaction between two or more flint-knappers may have led to rhythms and counter-rhythms, such as we still hear between smiths and mortar-and-pestle millers of grains and coffee beans. This, of course, was kite-flying rather than a wholly serious suggestion, but the possibilities remain. At what stage did a hominim realize that it could make more sound, or could alleviate painful palms, by striking two sticks or stones together, rather than by simple clapping? Again we turn to Morley and to the capability of the physiological and neurological expression of rhythm.
The physiological must be presumed from the above animal observations. The neurological would again, at its simplest, seem to be pre-human. There is plenty of evidence of gorillas drumming their chests and of chimpanzees moving rhythmically in groups. However, apes’ capacity for keeping a steady rhythm is very limited (Geissmann, 2000), suggesting that it constitutes a later evolutionary development in hominins. Perceptions of more detailed appreciation of rhythm, particularly of rhythmic variation, can only be hypothesized by studies of modern humans, especially of course of infantile behavior and perception.
From all this, it would seem that motor impulse, leading to rhythmic music and to dance could be at least as early as the simplest vocal inflection of sounds. Indeed, it could be earlier. We said above that animals have hearts, and certainly, all anthropoids have a heartbeat slow enough, and perceptible enough, to form some basis for rhythmic movement at a reasonable speed. Could this have been a basis for rhythmic movement such as we have just mentioned? This can only be a hypothesis, for there is no way to check it, but it does seem to me that almost all creatures seem to have an innate tendency to move together in the same rhythm when moving in groups, and this without any audible signal, so that some form of rhythmic movement may have preceded vocalization.
But Why Does Music Develop from Such Beginnings? What is the Purpose of Music?
There are four obvious purposes: dance, personal or communal entertainment, communication, and ritual.
Seemingly more important than these fairly obvious reasons for why music developed is one for why music began in the first place. This is something that Steven Mithen mentions again and again in his book, The Singing Neanderthals (Mithen, 2005): that music is not only cohesive on society but almost adhesive. Music leads to bonding, bonding between mother and child, bonding between groups who are working together or who are together for any other purpose. Work songs are a cohesive element in most pre-industrial societies, for they mean that everyone of the group moves together and thus increases the force of their work. Even today “Music while you Work” has a strong element of keeping workers happy when doing repetitive and otherwise boring work. Dancing or singing together before a hunt or warfare binds the participants into a cohesive group, and we all know how walking or marching in step helps to keep one going. It is even suggested that it was music, in causing such bonding, that created not only the family but society itself, bringing individuals together who might otherwise have led solitary lives, scattered at random over the landscape.
Thus, it may be that the whole purpose of music was cohesion, cohesion between parent and child, cohesion between father and mother, cohesion between one family and the next, and thus the creation of the whole organization of society.
Much of this above can only be theoretical—we know of much of its existence in our own time but we have no way of estimating its antiquity other than by the often-derided “evidence” of the anthropological records of isolated, pre-literate peoples. So let us now turn to the hard evidence of early musical practice, that of the surviving musical instruments.
This can only be comparatively late in time, for it would seem to be obvious that sound makers of soft vegetal origin should have preceded those of harder materials that are more difficult to work, whereas it is only the hard materials that can survive through the millennia. Surely natural materials such as grasses, reeds, and wood preceded bone? That this is so is strongly supported by the advanced state of many early bone pipes—the makers clearly knew exactly what they were doing in making musical instruments, with years or generations of experiment behind them on the softer materials. For example, some end-blown and notch-blown flutes, the earliest undoubted ones that we have, from Geissenklösterle and Hohle Fels in Swabia, Germany, made from swan, vulture wing (radius) bones, and ivory in the earliest Aurignacian period (between 43,000 and 39,000 years BP), have their fingerholes recessed by thinning an area around the hole to ensure an airtight seal when the finger closes them. This can only be the result of long experience of flute making.
So how did musical instruments begin? First a warning: with archeological material, we have what has been found; we do not have what has not been found. A site can be found and excavated, but if another site has not been found, then it will not have been excavated. Thus, absence of material does not mean that it did not exist, only that it has not been found yet. Geography is relevant too. Archeology has been a much older science in Europe than elsewhere, so that most of our evidence is European, whereas in Africa, where all species of Homo seem to have originated, site archeology is in its infancy. Also, we have much evidence of bone pipes simply because a piece of bone with a number of holes along its length is fairly obviously a probable musical instrument, whereas how can we tell whether some bone tubes without fingerholes might have been held together as panpipes? Or whether a number of pieces of bone found together might or might not have been struck together as idiophones? We shall find one complex of these later on here which certainly were instruments. And what about bullroarers, those blades of bone, with a hole or a constriction at one end for a cord, which were whirled around the player’s head to create a noise like thunder or the bellowing of a bull, or if small and whirled faster sounded like the scream of a devil? We have many such bones, but how many were bullroarers, how many were used for some other purpose?
So how did pipes begin? Did someone hear the wind whistle over the top of a broken reed and then try to emulate that sound with his own breath? Did he or his successors eventually realize that a shorter piece of reed produced a higher pitch and a longer segment a lower one? Did he ever combine these into a group of tubes, either disjunctly, each played by a separate player, as among the Venda of South Africa and in Lithuania, or conjointly lashed together to form a panpipe for a single player? Did, over the generations, someone find that these grouped pipes could be replaced with a single tube by boring holes in it, with each hole representing the length of one of that group? All this is speculation, of course, but something like it must have happened.
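For readers who want a feel for the length-to-pitch relationship speculated about here, the following Python sketch uses the idealized textbook formulas for open and stopped cylindrical pipes; it ignores end corrections, bore shape, and reed behavior, and the example lengths are arbitrary.

```python
# Idealized length-to-pitch relation for a cylindrical pipe:
#   open pipe:    f = v / (2 * L)
#   stopped pipe: f = v / (4 * L)
# This is a simplification; real instruments need end corrections.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def open_pipe_pitch(length_m):
    return SPEED_OF_SOUND / (2 * length_m)

def stopped_pipe_pitch(length_m):
    return SPEED_OF_SOUND / (4 * length_m)

for length in (0.15, 0.30, 0.60):   # arbitrary example lengths in metres
    print(f"L = {length:.2f} m: open ~{open_pipe_pitch(length):.0f} Hz, "
          f"stopped ~{stopped_pipe_pitch(length):.0f} Hz")
# Halving the length doubles the frequency, i.e. raises the pitch an octave.
```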
Or were instruments first made to imitate cries? The idea of the hunting lure, the device to imitate an animal’s cry and so lure it within reach, is of unknown age. Or were they first made to imitate the animal in a ritual to call for the success of tomorrow’s hunt? Some cries can be imitated by the mouth; others need a tool, a short piece of cane, bits of reed or grass or bone blown across the end like a key or a pen-top. Others are made from a piece of bark held between the tongue and the lip (I have heard a credit card used in this way!). The piece of cane or bone would only produce a single sound, but the bark, or in Romania a carp scale, can produce the most beautiful music as well as being used as a hunting call. The softer materials will not have survived and with the many small segments of bone that we have, there is no way to tell whether they might have been used in this way or whether they are merely the detritus from the dining table.
The Divje Babe bone, a perforated cave bear femur found in Slovenia, does raise the whole question of whether H. neanderthalensis knew of or practised music in any form. For rhythm, we can only say surely, as above, that if earlier hominids could have, so could H. neanderthalensis. Could they have sung? A critical anatomical feature is the position of the larynx (Morley, 2013, 135ff); the lower the larynx in the throat, the longer the vocal cords and thus the greater flexibility of pitch variation and of vowel sounds (to put it at its simplest). It would seem that with H. heidelbergensis and its successors the larynx was lower and thus that singing, as distinct from humming, could have been possible, but “seems” is necessary because, as is so often the case, this is still the subject of controversy. However, it does seem fairly clear that H. neanderthalensis could indeed have sung. It follows, too, that while the Divje Babe “pipe” may or may not have been an instrument, others may yet be found that were musical instruments. There is evidence that the Neanderthals had at least artistic sensibilities, for there are bones with scratch marks on them that may have been some form of art, and certainly there are a number of small pierced objects, pieces of shell, animal teeth, and so forth, found in various excavations that can only have served as beads for a necklace or other ornamentation – or just possibly as rattles. There have also been found pieces of pigments of various colors, some of them showing wear marks and thus that they had been used to color something, and at least one that had been shaped into the form of a crayon, indicating that some reasonably delicate pigmentation had been desired. Burials have been found, with some small deposits of grave goods, though whether these reveal sensibilities or forms of ritual or belief, we cannot know (D’Errico et al., 2003, 19ff). There have also been found many bone awls, including some very delicate ones which, we may presume, had been used to pierce skins so that they could be sewn together. All this leads us to the conclusion that the Neanderthals had at least some artistic and other feelings, were capable of some musical practices, even if only vocal, and were clothed, rather than being the grunting, naked savages that have been assumed in the past.
What Is Velvet Fabric?
Posted by: nsj5521sw - 08-24-2021, 03:03 AM - Forum: Welcomes and Introductions - No Replies
Velvet is a sleek, soft fabric that is commonly used in intimate garments, upholstery and other textile applications. Due to how expensive it was to produce velvet textiles in the past, this fabric is often associated with the aristocracy. Even though most types of modern velvet are adulterated with cheap synthetic materials, this unique fabric remains one of the sleekest, softest man-made materials ever engineered.
The first recorded mention of velvet fabric is from the 14th century, and scholars of the past mostly believed that this textile was originally produced in East Asia before making its way down the Silk Road into Europe. Traditional forms of velvet were made with pure silk, which made them incredibly popular. Asian silk was already very soft, but the unique production processes used to make velvet result in a material that’s even more sumptuous and luxurious than other silk products.
Until velvet gained popularity in Europe during the Renaissance, this fabric was commonly used in the Middle East. The records of many civilizations located within the borders of modern Iraq and Iran, for instance, indicate that velvet was a favorite fabric among the royalty of the region.
When machine looms were invented, velvet production became much less expensive, and the development of synthetic fabrics that somewhat approximate the properties of silk finally brought the wonders of velvet to even the lowest rungs of society. While today’s velvet may not be as pure or exotic as the velvet of the past, it remains prized as a material for curtains, blankets, stuffed animals, and all manner of other products that are supposed to be as soft and cuddly as possible.
While various materials can be used to make velvet, the process used to produce this fabric is the same regardless of which base textile is used. Velvet can only be woven on a unique type of loom that spins two layers of fabric simultaneously. These fabric layers are then separated, and they are wound up on rolls.
Velvet is made with vertical yarn, and velveteen is made with horizontal yarn, but otherwise, these two textiles are made with largely the same processes. Velveteen, however, is often mixed with normal cotton yarn, which reduces its quality and changes its texture.
Silk, one of the most popular velvet materials, is made by unraveling the cocoons of silkworms and spinning these threads into yarn. Semi-synthetic textiles such as rayon are made by regenerating wood-pulp cellulose into filaments, while fully synthetic textiles are made by rendering petrochemicals into filaments. Once one of these yarn types is woven into velvet cloth, it can be dyed or treated depending on the intended application.
The main desirable attribute of velvet is its softness, so this textile is primarily used in applications in which fabric is placed close to the skin. At the same time, velvet also has a distinctive visual allure, so it’s commonly used in home decor in applications such as curtains and throw pillows. Unlike some other interior decor items, velvet feels as good as it looks, which makes this fabric a multi-sensory home design experience.
Due to its softness, velvet is sometimes used in bedding. In particular, this fabric is commonly used in the insulative blankets that are placed between sheets and duvets. Velvet is much more prevalent in womenswear than it is in clothing for men, and it is often used to accentuate womanly curves and create stunning eveningwear. Some stiff forms of velvet are used to make hats, and this material is popular in glove linings.
China leads the world as the most prolific producer of synthetic textiles. This and other reckless industrial practices have rapidly made the country the world's largest polluter as well, and China lags far behind the rest of the world in the gradual switch to sustainable fabrics and non-polluting production processes.
Since “velvet” refers to a fabric weave instead of a material, it can’t technically be said that velvet as a concept has any impact on the environment. The different materials used to make velvet, however, have varying degrees of environmental impact that should be carefully considered.
Environmental impact of silk
Silk is the closest thing we have to an ideal fabric from an environmental standpoint. This fabric is still, in most cases, produced the same way it has been produced for thousands of years, and since the production of silk is not aided by any pesticides, fertilizers, or other toxic substances, making this fabric does not have any significant negative environmental impact.
Environmental impact of rayon and other synthetic textiles
Rayon is the most commonly used substitute for silk in velvet and velvet-inspired fabrics, and the production of this semi-synthetic fiber is significantly harmful to the environment. The rayon production process involves multiple chemical washes, and the solvents involved, notably carbon disulfide, are toxic.
Essentially, conventional rayon production introduces large quantities of harmful chemicals into the water supply as the fiber is created. With these drawbacks in full view, the only reason that rayon is still produced is that it is inexpensive.
The term “velvety” means soft, and it takes its meaning from its namesake fabric: velvet. The soft, smooth fabric epitomizes luxury, with its smooth nap and shiny appearance. Velvet has been a fixture of fashion design and home decor for years, and its high-end feel and appearance make it an ideal textile for elevated design.
Velvet is a soft, luxurious fabric that is characterized by a dense pile of evenly cut fibers that have a smooth nap. Velvet has a beautiful drape and a unique soft and shiny appearance due to the characteristics of the short pile fibers.
Velvet fabric is popular for evening wear and dresses for special occasions, as the fabric was initially made from silk. Cotton, linen, wool, mohair, and synthetic fibers can also be used to make velvet, making it less expensive and suitable for daily-wear clothes. Velvet is also a fixture of home decor, where it’s used as upholstery fabric, curtains, pillows, and more.
The first velvets were made from silk and, as such, were incredibly expensive and only accessible by the royal and noble classes. The material was first introduced in Baghdad, around 750 A.D., but production eventually spread to the Mediterranean and the fabric was distributed throughout Europe.
New loom technology lowered the cost of production during the Renaissance. During this period, Florence, Italy became the dominant velvet production center.
Velvet is made on a special loom known as a double cloth, which produces two pieces of velvet simultaneously. Velvet is characterized by its even pile height, which is usually less than half a centimeter.
Velvet today is usually made from synthetic and natural fibers, but it was originally made from silk. Pure silk velvet is rare today, as it’s extremely expensive. Most velvet that is marketed as silk velvet combines both silk and rayon. Synthetic velvet can be made from polyester, nylon, viscose, or rayon.
There are several different types of velvet fabric, as the fabric can be woven from a variety of different materials using a variety of methods.
Crushed velvet. As the name suggests, crushed velvet has a “crushed” look that is achieved by twisting the fabric while wet or by pressing the pile in different directions. The appearance is patterned and shiny, and the material has a unique texture.
Panne velvet. Panne velvet is a type of crushed velvet for which heavy pressure is applied to the material to push the pile in one direction. The same pattern can appear in knit fabrics like velour, which is usually made from polyester and is not true velvet.
Embossed velvet. Embossed velvet is a printed fabric created via a heat stamp, which is used to apply pressure to velvet, pushing down the piles to create a pattern. Embossed velvet is popular in upholstery velvet materials, which are used in home decor and design.
Ciselé. This type of patterned velvet is created by cutting some looped threads and leaving others uncut.
Plain velvet. Plain velvet is usually a cotton velvet. It is heavy with very little stretch and doesn’t have the shine that velvet made from silk or synthetic fibers has.
Stretch velvet. Stretch velvet has spandex incorporated in the weave which makes the material more flexible and stretchy.
Pile-on-pile velvet. This type of velvet has piles of varying lengths that create a pattern. Velvet upholstery fabric usually contains this type of velvet.
Velvet, velveteen, and velour are all soft, drapey fabrics, but they differ in terms of weave and composition.
Velour is a knitted fabric made from cotton and polyester that resembles velvet. It has more stretch than velvet and is great for dance and sports clothes, particularly leotards and tracksuits.
Velveteen pile is much shorter than velvet pile, and instead of creating the pile from the vertical warp threads, velveteen's pile comes from the horizontal weft threads. Velveteen is heavier and has less shine and drape than velvet, which is softer and smoother.
For budding fashion designers, understanding the characteristics and feel of different fabrics is key. In her 20s, Diane von Furstenberg convinced a textile factory owner in Italy to let her produce her first designs. With those samples, she flew to New York City to build one of the world’s most iconic and enduring fashion brands. In her fashion design MasterClass, Diane explains how to create a visual identity, stay true to your vision, and launch your product.
Become a better fashion designer with the MasterClass Annual Membership. Gain access to exclusive video lessons taught by fashion design masters including Marc Jacobs, Diane von Furstenberg, and more.
Roll forming of a high strength aluminum tube
Posted by: nsj5521sw - 08-24-2021, 03:01 AM - Forum: Welcomes and Introductions - No Replies
The presented paper provides a modelling strategy for roll forming of a high strength aluminum alloy tube. Roll forming allows the cost-effective production of large quantities of long profiles. Forming of high strength aluminum brings challenges like high springback and poor formability due to the low Young’s modulus, low ductility and high yield strength. Forming processes with high strength aluminum, such as the AA7075 alloy, therefore require a detailed process design. Three different forming strategies, one double radius strategy and two W-forming strategies, are discussed in the paper. The paper addresses the question whether common roll forming strategies are appropriate for the challenge of roll forming of a high strength aluminum micro channel tube. For this purpose, different forming strategies are investigated numerically regarding buckling, longitudinal strain distribution and final geometry. While geometry is quite the same for all strategies, buckling and strain distribution differ with every strategy. The result of the numerical investigation is an open tube that can be welded into a closed tube in a subsequent step. Finally, roll forming experiments are conducted and compared with the numerical results.
Current research in production technology focuses primarily on increasing resource efficiency and thus follows the approach of fundamental sustainability of processes and products. High strength aluminum alloys (e.g. AA7075) are commonly used in aerospace applications in spite of their high cost of about 5 €/kg and poor formability [1]. Due to ambitious legal requirements, such as the CO2 target in automotive engineering, new lightweight construction concepts are still needed [2]. An excellent basis is offered by the production of high strength AA7075 thin walled tubes as semi-finished products by roll forming. These can be further processed in subsequent customized processes such as welding, stamping, cutting or rotary swaging.
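To give a feel for why springback is singled out as a challenge for AA7075, here is a hedged back-of-the-envelope Python sketch using a common textbook estimate for springback in simple air bending; the material values are typical handbook figures and the bend geometry is assumed, so the numbers are illustrative and not results from the paper.

```python
# Back-of-the-envelope springback comparison using a common textbook
# estimate for bending (Ri = bend radius under load, Rf = radius after
# release, Y = yield strength, E = Young's modulus, t = thickness):
#   Ri/Rf = 4*(Ri*Y/(E*t))**3 - 3*(Ri*Y/(E*t)) + 1
# Material data are typical handbook values, not taken from the paper.

def springback_ratio(Ri_mm, t_mm, yield_MPa, E_MPa):
    x = Ri_mm * yield_MPa / (E_MPa * t_mm)
    return 4 * x**3 - 3 * x + 1     # Ri/Rf; smaller value = more springback

materials = {
    "AA7075-T6 (high strength aluminum)": (500.0, 71_000.0),
    "mild steel":                          (250.0, 210_000.0),
}

Ri, t = 5.0, 1.0   # assumed bend radius and sheet thickness in mm
for name, (Y, E) in materials.items():
    ratio = springback_ratio(Ri, t, Y, E)
    print(f"{name}: Ri/Rf = {ratio:.3f}  (Rf = {Ri / ratio:.2f} mm)")

# The aluminum bend opens up noticeably more once the rolls release it,
# which is one reason the process design has to be so careful.
```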
According to DIN 8586, roll forming is a bending technology with rotating tool motion to produce open and closed profiles [3]. Several pairs of forming rolls are aligned one behind the other for the forming process. The friction between the rotating forming rolls and the sheet metal causes a forward movement of the sheet. Simultaneously the sheet is formed in and between the stations. For the production of large quantities, roll forming is a cost-effective manufacturing process, compared to tube extrusion or tube drawing. Roll forming can also be competitive for smaller quantities, if the number of forming passes is small enough [4]. The incremental nature of the roll forming process also allows forming of high strength materials, such as ultra high strength steel (UHSS) [5].
During roll forming there is a limit, the buckling limit strain (BLS), on the amount of deformation that can be reached in one forming station [6]. Abeyrathna [5], Park [7] and Bui [8] showed that longitudinal strain has a major impact on product defects, such as bow or buckling. The maximum longitudinal strain occurs in the area of the band edge. Plastic elongation in the roll gap between the forming rolls, followed by compression when the sheet leaves the forming rolls, leads to buckling. Figure 1 illustrates this elongation, followed by compression, when forming a tube. To prevent buckling, the maximum longitudinal strain must be kept low. Once buckling takes place, welding of the formed tube becomes very difficult or even impossible [9]. Parameters with a large influence on buckling are the stiffness of the sheet and the yield strength of the material. According to Halmos [10], elongation of the band edge depends on the flange height and the inter-station distance ld. High bending angles of a single forming station Θp and a small inter-station distance ld lead to large elongation of the band edge and thus to buckling. For circular sections (e.g. tubes), the BLS is 5–10 times higher than the BLS for a U-profile [6].

Groche et al. [11], Park et al. [7], Zou et al. [12] and Lee et al. [13] showed that roll forming of high strength materials, and especially of high strength aluminum, brings challenges compared to commonly roll formed steel grades. High strength leads to high springback and thus to lower dimensional accuracy in the processed part. Parameters that influence springback are shown in Table 1. Difficulties with aluminum include early fracture due to low ductility, higher springback and redundant deformation. This requires a well-designed forming strategy in order to obtain the lowest possible springback and buckling in the roll forming process and the best quality of the processed part. In contrast, aluminum shows forgiving behavior with regard to buckling due to a higher BLS compared to steel [14].

The single radius forming strategy has the advantage of forming tubes with different sheet thicknesses on the same tools. A flower pattern with a constant bending radius over the entire cross-section of the sheet is characteristic of single radius forming. For high-strength materials, the single radius forming strategy is not applicable due to the high springback caused by the high elastic bending content [10, 18].
The double radius- and W-forming strategies are appropriate for high strength steels. For both strategies, two radii are combined in each pass, whereby the radius in the edge area is equal to the end radius already in the first pass of the process [18]. In contrast to double radius forming, a negative bending is initially introduced in the middle section in the W-forming process. The main advantage of this strategy is that the final radius can be formed into the band edge area at the first pass of the process [18]. Another approach is described by Jiang et al. [19] with a cage roll forming mill for the production of electric resistance welded pipes.
The height displacement of the profile is called "up-hill" or "down-hill". In the down-hill strategy, the profile is lowered step by step in each pass. The use of a down-hill forming strategy can reduce plastic elongation in the band edge and thus the number of forming stations [10]. Based on the fundamental differences in roll forming between aluminum and steel, this publication addresses the question of whether any of these strategies is suitable for forming a tube from the high-strength aluminum alloy AA7075.
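As a rough illustration of the relationship described above, the sketch below (Python) estimates the longitudinal band-edge strain from a purely geometric first-order model: between two stations spaced ld apart, the band edge of a flange of length h rises by roughly h·(sin θ2 − sin θ1), and the strain follows from the lengthened edge path. The flange length, bend angles and station distance are illustrative assumptions, not values from the paper, and the model ignores the deformation-zone effects discussed above.

import math

def edge_strain(h_mm, theta1_deg, theta2_deg, ld_mm):
    """First-order geometric estimate of longitudinal band-edge strain
    between two roll forming stations (small-strain approximation)."""
    dz = h_mm * (math.sin(math.radians(theta2_deg)) - math.sin(math.radians(theta1_deg)))
    path = math.hypot(ld_mm, dz)        # lengthened path of the band edge
    return path / ld_mm - 1.0           # engineering strain

# Illustrative numbers only (assumed): 20 mm flange, 15 degree bend increment,
# stations 350 mm apart.
eps = edge_strain(h_mm=20.0, theta1_deg=0.0, theta2_deg=15.0, ld_mm=350.0)
print(f"estimated edge strain: {eps:.5f}")   # about 1.1e-4 for these numbers

Larger per-station bend angles or shorter inter-station distances increase this estimate roughly quadratically, which is the geometric reason why high bending angles and small ld promote buckling.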
FE-Simulation of the roll forming process
The roll forming tools are designed by numerical simulation of the process. The target geometry is a tube with an outer diameter of d = 54.98 mm (ro = 27.49 mm / ri = 25.99 mm) and a wall thickness of s0 = 1.5 mm. An AA7075-T6 aluminum alloy is used for the roll forming process. Table 2 shows the mechanical properties of the alloy.

The first forming strategy, suggested automatically by UBECO Profil after defining the target geometry, is a double radius forming strategy with 27 passes in total. Based on tube forming sequences in the literature [15, 16], the number of passes is reduced to 14 by skipping every second pass in order to increase process efficiency. After the reduction to 14 passes, the edge strain is still below the critical limit in every stage of the process according to the PSA. The approach for the first forming strategy is to form the tube in uniform increments and to keep the longitudinal strain in the band edge low. The further approach is to calculate the stresses of the formed tube in order to arrive at the number of passes required. Forming strategy 2R is the first strategy numerically investigated with the FE software Marc Mentat.

In this paper, roll forming of a high strength aluminum tube is investigated. Because the design parameters are difficult to determine, roll forming of high strength aluminum is a challenge, and conventional roll forming strategies quickly reach their limits when forming aluminum or high strength steels. To form a tube out of a high-strength aluminum alloy such as AA7075, a W-forming strategy is recommended. Another positive influence is the application of a down-hill strategy. The investigations have shown that an efficient roll forming production line for high strength aluminum tubes can be set up even with a small number of forming passes. The W-forming strategies showed good behavior with regard to buckling compared to the double radius forming strategy. Forming strategy W2 combines the advantage of few passes with a good final part geometry thanks to the detailed process design. The numerical investigation and the subsequent experiments demonstrated the feasibility of roll forming a high-strength aluminum tube. It is shown that conventional design methods are also valid for high-strength materials.

A further result of the numerical investigation is that the design of the tools should not be based on the longitudinal strain in the band edge alone. For a first estimate, the elongation of the band edge is a valid factor, but for an exact process design a numerical simulation should always be performed. In addition, the BLS is material dependent, which makes an analytical calculation even more difficult.
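As a quick sanity check on the target geometry quoted above, the following sketch computes the required flat strip (blank) width, assuming the neutral fibre lies at mid-wall thickness. The mid-thickness assumption is mine; a real process design would use the bend-allowance convention of the tool-design software.

import math

d_outer = 54.98   # mm, target outer diameter
s0      = 1.5     # mm, wall thickness
r_o, r_i = d_outer / 2, d_outer / 2 - s0   # 27.49 mm / 25.99 mm, as above

# Strip width = circumference of the neutral fibre (assumed at mid thickness)
d_neutral = d_outer - s0
strip_width = math.pi * d_neutral
print(f"r_o = {r_o} mm, r_i = {r_i} mm")
print(f"approx. strip width: {strip_width:.1f} mm")   # about 168 mm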
Regarding the springback angle, the experimental investigations show only small deviations from the FE model. The reasons for this are the simplified material model, which does not consider combined hardening effects, the influence of the smaller modulus of elasticity after plastic deformation, and the compliance of the forming stand. Nevertheless, the simplified FE model provides sufficiently accurate results regarding buckling and the geometry of the tube.
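To give a feel for why springback is so pronounced for AA7075-T6, the sketch below evaluates a common textbook springback relation for pure bending, Ri/Rf = 4(Ri·Y/(E·t))^3 − 3(Ri·Y/(E·t)) + 1. The yield strength and Young's modulus are typical handbook values assumed here (the paper's Table 2 is not reproduced), and the relation itself is a simplification that, as noted above, ignores combined hardening and the reduced modulus after plastic deformation.

def springback_ratio(r_i_m, yield_pa, e_pa, t_m):
    """Textbook estimate: ratio of loaded to unloaded bend radius after springback."""
    k = r_i_m * yield_pa / (e_pa * t_m)
    return 4 * k**3 - 3 * k + 1

# Assumed typical AA7075-T6 properties and the tube mid-radius from the geometry above
r_i = 26.74e-3   # m, bend radius at mid wall (assumption)
Y   = 480e6      # Pa, yield strength (typical handbook value)
E   = 71.7e9     # Pa, Young's modulus (typical handbook value)
t   = 1.5e-3     # m, sheet thickness

ratio = springback_ratio(r_i, Y, E, t)
r_f = r_i / ratio
print(f"Ri/Rf = {ratio:.3f}, unloaded radius ~ {r_f*1e3:.1f} mm")
# The bend radius opens noticeably after unloading, illustrating the high springback.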
The axial crash of a thin-walled circular seamless aluminum tube is investigated in this study. These kinds of tubes are usually used in automobile and train structures to absorb impact energy. An explicit finite element method (FEM) is used to model and analyse the behaviour. A formulation of the energy absorption and the mean crash force over the range of variables is presented using design of experiments (DOE) and the response surface method (RSM). Some results are compared with experimental tests for validation, and a comparison with the analytical treatment of this problem has also been made. The mean crash force has been considered as a constraint, as its value is directly related to crash severity and occupant injury. The results show that triggering causes a decrease in the maximum force level during the crash.
|
|
|
Electric cables are normally installed on the assumption of a safe working life |
Posted by: zjjsw25ss - 08-23-2021, 07:06 AM - Forum: Welcomes and Introductions
- No Replies
|
|
Electric cables are normally installed on the assumption of a safe working life of at least 20 years. Changes in the insulating material take place with the passing of time and these changes, which may eventually result in an electrical breakdown, are accelerated at higher temperatures. Thus, if the working life is fixed, the limiting factor is the temperature at which the cable is required to operate.
During operation, the temperature at which the cable will operate depends upon the ambient temperature and the heating effects of the current produced due to the resistance of the cable conductors.
The heat dissipation of buried cables depends on the depth of laying, ground ambient temperature and its thermal resistivity, these being dependent on their geographical location and the season of the year. Nearby cables would also affect the ground temperature. Cables in air reach steady operating temperatures more quickly than similar cables underground and large cables take longer than small ones.
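A minimal steady-state sketch of the point made above: the conductor temperature is roughly the ambient temperature plus the I²R loss multiplied by a lumped thermal resistance between conductor and surroundings. The copper resistivity is a standard value; the cross-section, current and thermal resistance are illustrative assumptions, since the real figures depend on the installation conditions described above.

RHO_CU = 1.72e-8   # ohm*m, resistivity of copper at 20 C (standard value)

def conductor_temperature(i_amp, area_mm2, r_th_km_per_w, t_ambient_c):
    """Steady-state conductor temperature from I^2*R heating through a lumped
    thermal resistance (K*m/W) between conductor and surroundings."""
    r_per_m = RHO_CU / (area_mm2 * 1e-6)   # ohm per metre of conductor
    losses_per_m = i_amp**2 * r_per_m      # watts dissipated per metre
    return t_ambient_c + losses_per_m * r_th_km_per_w

# Illustrative only: 120 mm2 copper conductor carrying 300 A, ground ambient 15 C,
# assumed lumped thermal resistance of 3.5 K*m/W for a buried installation.
print(f"{conductor_temperature(300, 120, 3.5, 15):.1f} C")   # about 60 C here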
The heat may cause a change in the properties of an insulating material or in extreme cases, deformation may occur. It is important, therefore, to realise that there is “a cable for the job”.
There is a very wide range of cables designed to operate at voltages up to 400 kV. It is not possible to discuss all these in this book, but the reader is referred to a publication, Copper Cables, published by the Copper Development Association.
The majority of cables have copper conductors, and in a cable these may vary from a single solid conductor to a stranded construction.
The number of wires contained in the most common conductors is 3, 7, 19, 37, 61 or 91. Thus, 37/0·083 indicates that the conductor has 37 wires, each having a diameter of 0·083 in.
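The stranding designation quoted above can be turned into a total conductor cross-section directly. The helper below parses a 'wires/diameter-in-inches' designation such as 37/0.083 and reports the copper area; it is only a sketch of the arithmetic and ignores the small allowance normally made for the lay of the strands.

import math

def conductor_area(designation: str):
    """Parse a 'wires/diameter-in-inches' stranding designation, e.g. '37/0.083',
    and return (area in square inches, area in mm^2)."""
    n_str, dia_str = designation.split("/")
    n, dia_in = int(n_str), float(dia_str)
    area_in2 = n * math.pi * (dia_in / 2) ** 2
    return area_in2, area_in2 * 645.16   # 1 in^2 = 645.16 mm^2

a_in2, a_mm2 = conductor_area("37/0.083")
print(f"37/0.083 -> {a_in2:.3f} sq in  ({a_mm2:.0f} mm^2)")   # ~0.200 sq in, ~129 mm^2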
A study of electric cable used for 18 years outdoors in Romania shows that only 2% of the original quantity of di-(2-ethylhexyl) phthalate was lost during the service life. The formulation was stabilized with a lead stabilizer. Twenty percent of the original stabilizer had been consumed and required replacement in the recycling process.3
A similar study in Sweden (see formulation in the next section) showed that only 1% of extractable matter was lost during 30-40 years of cable use, that the material remained thermally stable, and that mechanical performance, measured by elongation, changed very little. Experimental studies conducted in the laboratory, which simulated service life by thermal aging at 80°C and assumed an activation energy of 95 kJ/mol in the Arrhenius equation, showed that the cables should perform for at least 44 years. The cables collected from the field are suitable for recycling with minimal adjustments to the formulation. Figure 13.19 shows that the stability of the insulation has a linear relationship with the duration of aging. Figure 13.20 shows that the changes in elongation are very small.4
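The Arrhenius extrapolation mentioned above can be sketched in a few lines: with an activation energy of 95 kJ/mol, the acceleration factor between the 80°C aging temperature and an assumed service temperature follows from the ratio of reaction rates. The 30°C service temperature below is my assumption for illustration, not a value from the cited study.

import math

R = 8.314   # J/(mol*K), gas constant

def acceleration_factor(ea_j_mol, t_service_c, t_aging_c):
    """Arrhenius acceleration factor between aging and service temperatures."""
    t_s = t_service_c + 273.15
    t_a = t_aging_c + 273.15
    return math.exp(ea_j_mol / R * (1.0 / t_s - 1.0 / t_a))

af = acceleration_factor(95e3, t_service_c=30.0, t_aging_c=80.0)
print(f"acceleration factor ~ {af:.0f}")                    # roughly 200x for these assumptions
print(f"1 day at 80 C ~ {af / 365.25:.2f} years at 30 C")   # so ~2.5 months of aging covers ~44 years

Under these assumed conditions, a few months of oven aging at 80°C corresponds to several decades of service, which is consistent with the order of magnitude of the lifetime quoted above.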
Degradation of insulation performance of electric cables is basically evaluated by tests and analyses. Based on the result of equipment qualification tests, subsequent analyses to confirm the integrity after a 60-year service period of cables and the result of insulation resistance measurement and insulation diagnostic tests, it has been concluded that immediate degradation of insulation performance is unlikely to occur for most types of cables.
Degradation of insulation performance is detected by the insulation resistance measurement, insulation diagnostic tests and performance tests of systems and components, which are performed during the inspection.
The Japanese government commenced a national R&D project on cable ageing to obtain more accurate predictions. Under this project many experiments are being performed to acquire time-dependent data on cable ageing. Superposition of the time-dependent data, as proposed by IEC 1244-2, is suggested as a suitable method to predict cable ageing.
The Japanese plant utilities conduct measurement of insulation resistance to monitor degradation of insulation performance and are planning to perform sample investigation to acquire actual degradation data of cable insulations.
An area of rubber cable technology where much research and development work has been concentrated in recent years is that of the behaviour of cables in fires. Although they may overheat when subject to current overloads or mechanical damage, electric cables in themselves do not present a primary fire hazard. However, cables are frequently involved in outbreaks of fire from other causes which can eventually ignite the cables. The result can be the propagation of flames and production of noxious fumes and smoke. This result, added to the fact that cables can be carrying power control circuits which it is essential to protect during a fire to ensure an orderly shutdown of plant and equipment, has led to a large amount of development work by cablemakers. This work has included investigations on a wide range of materials and cable designs, together with the establishment of new test and assessment techniques.
Although PVC is essentially flame retardant, it has been found that, where groups of cables occupy long vertical shafts and there is a substantial airflow, fire can be propagated along the cables. Besides delaying the spread of fire by sealing ducts at spaced intervals, an additional safeguard is the use of cables with reduced flame propagating properties. Attention has also been focused on potential hazards in underground railways, where smoke and toxic fumes could distress passengers and hinder their rescue. Initially, compounds with reduced acidic products of combustion were incorporated in cables which have barrier layers to significantly reduce the smoke generated. In the meantime, other cablemaking materials have been developed which contain no halogens and which also produce low levels of smoke and toxic fumes as well as having reduced flame propagating properties. These are now incorporated in British Standards such as BS 6724 and BS 7211.
A different requirement in many installations, such as in ships, aircraft, nuclear plant and the petrochemical industry (both on and off-shore), is that critical circuits should continue to function during and after a fire. Amongst the cables with excellent fire withstand performance, mineral insulated metal sheathed cables are particularly suited for use in emergency lighting systems and industrial installations where ‘fire survival’ is required. As fire survival requirements on oil rigs and petrochemical plants become more severe, new control cable designs have been developed to meet fire tests at 1000°C for 3h with impact and water spray also applied, and also to have low smoke and low toxic properties.
Another novel approach to fire protection in power stations and warehouses is the use of fire detector cables (Figure 31.4). These are used in a system which both detects and initiates the extinction of a fire in the relatively early stages of its growth. These cables have also been installed in shops, offices and public buildings, where the cables can be used to operate warning lights or alarms.
The starting point is the real-life cable installations, simply because any fire regulation aims at addressing real-life fires. However, realistic cable installations cannot be used in a testing and classification system. The costs will be enormous as the number of different installations is almost infinite. The solution is therefore based on the assumption that certain large-scale reference scenarios can be representative of real-life hazards and that performance requirements of the cables can be identified in these reference scenarios. The term reference scenario is here used for an experimental set-up that is deemed to represent real life.
In exact terms the representation will never be true. However, a reference scenario is created in such a way that experimental fires in the scenario will be representative of a large number of real practical cases sufficiently accurately for a regulator. The burning behaviour of cables in the reference scenarios can then be linked to the burning behaviour in standardised test procedures. This is achieved by analysing fire parameters such as heat release rate, flame spread and smoke production from experiments in the reference scenario and comparing them to those from the standard tests. When this link is established it is possible to use measurements in the standardised tests for classification. Thus the classification of a cable in a standard test will reflect a certain burning behaviour in the reference scenario, which in turn is linked to real-life hazard situations.
The test used to determine the flame resistance of electric cables, signal cables, and cable splice kits is described in Title 30, Code of Federal Regulations, Part 7, Subpart K (CFR 30, 2005). The principal parts of the apparatus are a test chamber or a rectangular enclosure measuring 17 inches deep by 14 inches high by 39 inches wide (43.2 cm deep by 35.6 cm high by 99.1 cm wide) and open at the top and front. The floor or base of the chamber is lined with a noncombustible material to contain burning matter which may fall from the test specimen during a test. Permanent connections are mounted to the chamber and extend to the sample end location. The connections are used to energize the electric cable and splice specimens. The connections are not used when testing signaling cables. A rack consisting of three metal rods, each measuring approximately 3/16 inch (0.48 cm) in diameter is used to support the specimen during a test. The horizontal portion of the rod which contacts the test specimen shall be approximately 12 inches (30.5 cm) in length. A natural gas type Tirrill burner, with a nominal inside diameter of 3/8 inch (0.95 cm), is used to apply the flame to the test specimen.
For tests of electric cables and splices, a source of either alternating current or direct current is used for heating the power conductors of the test specimen. The current flow through the test specimen is regulated and the open circuit voltage is not to exceed the voltage rating of the test specimen. An instrument is used to monitor the effective value of heating current flow through the power conductors of the specimen. Also, a thermocouple is used to measure conductor temperature while the cable or cable splice kit is being electrically heated to 400 °F (204.4 °C). For the electric cable test, three specimens each three feet (0.91 m) in length are prepared by removing five inches of jacket material and two inches of conductor insulation from both ends of each test specimen.
For splice kits, a splice is prepared in each of three sections of a MSHA-approved flame-resistant cable. The cable used is the type that the splice kit is designed to repair. The finished splice must not exceed 18 inches (45.7 cm) or be less than 6 inches (15.2 cm) in length for test purposes. The spliced cables are three feet in length with the midpoint of the splice located 14 inches (35.6 cm) from one end. Both ends of each of the spliced cables are prepared by removing five inches of jacket material and two inches of conductor insulation. The type, amperage, voltage rating, and construction of the power cable must be compatible with the splice kit design.
The test specimen is centered horizontally in the test chamber on the three rods. The three rods are positioned perpendicular to the longitudinal axis of the test specimen and at the same height. This arrangement permits the tip of the inner cone from the flame of the gas burner to touch the jacket of the test specimen. For splices, the third rod is placed between the splice and the temperature monitoring location at a distance 8 inches (20.3 cm) from the midpoint of the splice. The gas burner is adjusted to produce an overall blue flame five inches (12.7 cm) high with a three-inch (7.6 cm) inner cone and without the persistence of yellow coloration. The power conductors of the test specimen are connected to the current source. The connections must be compatible with the size of the cable's power conductors to reduce contact resistance. The power conductors of the test specimen are energized with an effective heating current value of five times the power conductor ampacity rating at an ambient temperature of 104 °F (40 °C).
The electric current is monitored through the power conductors of the test specimen with the current measuring device. The amount of heating current is adjusted to maintain the proper effective heating current value until the power conductors reach a temperature of 400 °F (204.4 °C). For electric cables, the tip of the inner cone from the flame of the gas burner is applied directly beneath the test specimen for 60 seconds at a location 14 inches (35.6 cm) from one end of the cable and between the supports separated by a 16 inch (40.6 cm) distance. For the splices made from the splice kits, the tip of the inner cone from the flame of a gas burner is applied for 60 seconds beneath the midpoint of the splice jacket. After subjecting the test specimen to the external flame for the specified time, the burner flame is removed from beneath the specimen while simultaneously turning off the heating current. The amount of time the test specimen continues to burn is recorded after the flame from the burner has been removed. The burn time of any material that falls from the test specimen after the flame from the burner has been removed is added to the total duration of flame. The length of burned (charred) area of each test specimen is measured longitudinally along the cable axis. The procedure is repeated for the remaining two specimens. For a cable or splice kit to qualify as flame resistant, the three test specimens must not exceed a duration of burning of 240 seconds and the length of the burned (charred) area must not exceed 6 inches (15.2 cm). The flame test of an electric cable is shown in Fig. 13.4 – the electric cable did not meet the test criteria.
|
|
|
The performance of a centrifugal fan with enlarged impeller |
Posted by: zjjsw25ss - 08-23-2021, 07:03 AM - Forum: Welcomes and Introductions
- No Replies
|
|
The influence of an enlarged impeller in an unchanged volute on the performance of a G4-73 type centrifugal fan is investigated in this paper. Comparisons are conducted between the fan with the original impeller and two larger impellers, with increments in impeller outlet diameter of 5% and 10% respectively, in numerical and experimental investigations. The internal characteristics obtained by the numerical simulation indicate that there is more volute loss in the fan with the larger impeller. Experimental results show that the flow rate, total pressure rise, shaft power and sound pressure level increase, while the efficiency decreases, when the fan operates with a larger impeller. Variation equations for the performance at the operating points of the fan with enlarged impellers are suggested. Comparisons between the experimental results and the trimming laws show that the trimming laws for the usual situation can predict the performance of the enlarged fan impeller with smaller error at higher flow rates, although the situation of application is not in agreement. The noise frequency analysis shows that the higher noise level of the larger impeller fan is caused by the reduced impeller–volute gap.
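For context on the trimming laws referred to in this abstract, a commonly quoted set of impeller-diameter scaling relations (usually stated for impeller trimming in an unchanged casing) is flow proportional to D, pressure rise proportional to D², and shaft power proportional to D³. The sketch below applies these exponents to the 5% and 10% diameter increases studied; the exponents are the usual textbook ones and are my assumption here, since the paper's own variation equations are not reproduced.

def scaled_performance(q1, dp1, p1, d_ratio):
    """Scale flow rate, pressure rise and shaft power for an impeller diameter
    ratio d_ratio = D2/D1 using the usual trimming exponents (1, 2, 3)."""
    return q1 * d_ratio, dp1 * d_ratio**2, p1 * d_ratio**3

# Baseline values are placeholders for the original G4-73 impeller
q1, dp1, p1 = 1.0, 1.0, 1.0
for inc in (0.05, 0.10):
    q2, dp2, p2 = scaled_performance(q1, dp1, p1, 1.0 + inc)
    print(f"+{inc:.0%} diameter: flow x{q2:.3f}, pressure x{dp2:.3f}, power x{p2:.3f}")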
An implicit, time-accurate 3D Reynolds-averaged Navier-Stokes (RANS) solver is used to simulate the rotating stall phenomenon in a centrifugal fan. The goal of the present work is to shed light on the flow field and particularly the aerodynamic noise at different stall conditions. Aerodynamic characteristics, frequency domain characteristics, and the contours of sound power level under two different stall conditions are discussed in this paper. The results show that, with the decrease of valve opening, the amplitude of the total pressure and flow fluctuations tends to become larger while the stall frequency remains the same. The flow field analysis indicates that the area occupied by stall cells expands with the decrease of flow rate. The noise calculation based on the simulation underlines the role of vortex noise after the occurrence of rotating stall, showing that the high noise area rotates along with the stall cell in the circumferential direction.
As the power source of the air and gas system in a thermal power plant, the operating status of the centrifugal fan is directly related to the safe and economic operation of the power plant. Rotating stall in a centrifugal fan is a local instability phenomenon in which one or more cells propagate along the blade row in the circumferential direction. The nonuniform flow region, the so-called stall cell, rotates at a fraction of the shaft speed, typically between 20% and 70%. This running mode is responsible for strong vibrations which could damage the blades [1]. It also increases the aerodynamic noise.
In order to reveal the generation mechanism of rotating stall, many models and theories have been proposed since the 1960s. In particular, experimental methods have been widely used to illustrate the characteristics of the internal flow field during stall. Lennemann and Howard discussed the causes of stall cells at low flow rate conditions using the hydrogen bubble flow visualization method [2]. Lucius and Brenner experimentally studied the speed variation of a centrifugal pump in the rotating stall stage [3]. For centrifugal turbomachines, multiple factors can affect the characteristics of stall. The vaneless diffuser, for example, has a significant influence on stall. Hasmatuchi et al. experimentally investigated the effect of blowing technology on the flow field of a centrifugal pump under rotating stall [4]. Rodgers conducted experimental research on rotating stall in a centrifugal compressor with a vaneless diffuser and found that the stall margin can be improved by adjusting the expansion pressure factor [5]. Abidogun carried out an experiment to investigate the influence of the vaneless diffuser on the stall characteristics. The results showed that increasing the length of the diffuser can increase the rotating speed of the stall, while changing the width showed no effect on stall [6].
Further efforts were made to study stall inception in order to avoid the occurrence or minimize the effect of stall. As is well accepted, the two types of stall inception proposed by Camp and Day, modal wave inception and spike inception, were investigated experimentally [7]. Leinhos et al. studied the development process of stall inception under instantaneous inflow distortion in an axial compressor [8].
With the rapid development of computer technology, numerical simulation has become an important method for flow field research of turbomachine under rotating stall conditions. Gourdain et al. investigated the ability of an unsteady flow solver to simulate the rotating stall phenomenon in an axial compressor and found that it was necessary to take the whole geometry into consideration to correctly predict the stall frequency [1]. Choi et al. investigated the effects of fan speed on rotating stall inception; the results showed that, at 60% speed (subsonic), tip leakage flow spillage occurred successively in the trailing blades of the mis-staggered blades [9]. Zhang et al. numerically studied the stall inception in a centrifugal fan, and the results showed that the stall inception experienced probably 50 rotor cycles developing into a stall group. The inception showed significant modal waveform. The importance of volute for generation of stall inception was illustrated through flow field analysis [10].
Aerodynamic noise is mainly caused by vortices and flow separation, so the unsteady behavior of rotating stall may have an influence on the noise of a centrifugal fan. In capturing the physical mechanism of the fan noise associated with rotating stall, the primary task is to characterize the noise. During the 1960s, the interaction between noise and turbulence was discussed by Powell, and the vortex sound theory was proposed to explain the generation of acoustic sound. Then, Lighthill made a breakthrough in aerodynamic noise theory by proposing the acoustic analogy [11]. Based on these works, Díaz et al. put forward a prediction of the tonal noise generation in an axial flow fan, and the noise level in the far-field region of a centrifugal blower was estimated by means of the acoustic analogy [12]. Scheit et al. analyzed the far-field noise of a centrifugal fan with an acoustic analogy method and presented design guidelines to optimize the radiated noise of the impeller [13]. The global control of subsonic axial fan noise at the blade passing frequency was also discussed by Gérard et al. [14], who aimed at cancelling the tonal noise by using a single loudspeaker in front of the fan with a single-input-single-output adaptive feedforward controller. In the work of Ouyang et al., the far-field noise generated by a cross-flow fan with different impellers was measured, showing the great influence of blade angles on the inflow pattern [15]. Based on the previous research, a new method to predict fan noise and performance was developed by Lee et al.; through an acoustic analogy, the acoustic pressures resulting from the unsteady force fluctuations of the blades are obtained [16].
In summary, a wide range of flow characteristics of rotating stall in compressors have been investigated, and the research has concentrated on stall inception. The present work focuses on two aspects: simulation of the rotating stall phenomenon with a 3D flow solver and seeking the deeper physical mechanism of this instability in a centrifugal fan. The numerical method is presented together with the model and the particular boundary conditions used. Results from the whole-geometry simulation are then analyzed. In the first part, aerodynamic characteristics and frequency domain characteristics of the centrifugal fan under different stall conditions are analyzed. In the second part, the velocity vector field distributions in the centrifugal fan are discussed. Finally, the noise characteristics of the centrifugal fan under different stall conditions are studied, and the noise characteristics during the circumferential propagation of stall cells are also discussed.
2. Centrifugal Fan Description
The configuration of the centrifugal fan studied in this work is shown in Figure 1. It is composed of the current collector, the impeller with 12 airfoil blades, and the volute. The inlet and outlet diameters of the impeller are 568 mm and 800 mm, respectively. The inlet and outlet widths of the impeller are 271 mm and 200 mm, respectively. The nominal rotation speed is 1450 rpm. The volute tongue gap is 1% of the impeller outlet diameter. The width of the rectangular volute is 520 mm, and a simple antivortex ring is set inside the volute to reduce the generation of vortices. At the design operating point, the volume flow is 6.32 m3/s and the total pressure is 1870 Pa.
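From the data above one can read off the characteristic frequencies that later show up in the spectra: the shaft frequency at 1450 rpm, the blade passing frequency for 12 blades, and, using the 20-70% of shaft speed range quoted earlier for stall cells, the band in which a rotating-stall frequency would be expected. The sketch below simply performs that arithmetic; it says nothing about the actual stall frequency found in the simulation.

rpm = 1450
blades = 12

shaft_hz = rpm / 60.0                               # shaft rotation frequency, ~24.2 Hz
bpf_hz = shaft_hz * blades                          # blade passing frequency, ~290 Hz
stall_band_hz = (0.20 * shaft_hz, 0.70 * shaft_hz)  # 20-70 % of shaft speed

print(f"shaft frequency: {shaft_hz:.1f} Hz")
print(f"blade passing frequency: {bpf_hz:.0f} Hz")
print(f"expected stall-cell frequency band: {stall_band_hz[0]:.1f}-{stall_band_hz[1]:.1f} Hz")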
As shown in Figure 8(b), under the combined influence of both the stall cell and the volute tongue, the high noise area is gradually elongated. Due to the propagation of the stall cell, it gradually moves away from the area of the volute tongue, weakening the superimposing effect. As time goes by, the high noise area in Figure 8(c) becomes further elongated with a trend toward separation, and the sound power level of the high noise areas decreases. In Figure 8(d), the high noise areas corresponding to the vortex noise and the volute tongue noise essentially separate, and the sound power level corresponding to the volute tongue declines greatly.
It can be concluded from Figure 8 that while the impeller passes three passages in the clockwise direction, the high noise area passes two impeller passages in the clockwise direction. This indicates that, in the absolute reference frame, the high noise area, occupying about three impeller passages, rotates in the same direction as the impeller under rotating stall and at the same speed as the stall cells, while in the relative reference frame the high noise area propagates opposite to the rotation of the impeller.
Through the analysis above, there are two major sources of noise in a centrifugal fan under rotating stall, namely, the vortex noise caused by the stall and the volute tongue noise caused by the rotation of the impeller. When the stall cell spreads to the volute tongue, the superimposing effect of vortex noise and volute tongue noise makes the sound power level highest and the high noise area largest. As the stall cell moves away from the volute tongue, the corresponding high noise areas gradually separate; along with that, the sound power level decreases and the high noise area becomes smaller. Therefore, the aerodynamic noise of the centrifugal fan under rotating stall changes periodically over time, and the fluctuation period is the same as the rotation period of the stall cell.
The authors declare that they have no financial or personal relationships with other people or organizations that can inappropriately influence their work; there is no professional or other personal interests of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.
|
|
|
3 Ways Suspended Platforms Increase Efficiency for Vertical-Vessel Maintenance |
Posted by: zjjsw25ss - 08-23-2021, 06:56 AM - Forum: Welcomes and Introductions
- No Replies
|
|
It’s time to upgrade maintenance practices for vertical vessels. Like any routine maintenance, inspecting, removing and replacing refractory in vertical vessels places a costly burden on facilities in terms of downtime and lost productivity. One of the main reasons for this is the traditional solution for accessing vertical surfaces – scaffolding – severely limits efficiency. It also increases safety risks for employees.
Processing facilities are taking action to reclaim maintenance productivity and safety by investing in custom-manufactured suspended platforms for vertical-vessel operations. These systems feature a lightweight, heavy-duty metal platform that is erected inside the vessel and raised or lowered using manual or electric hoists for hassle-free maintenance and relining applications.
Suspended platforms offer a number of benefits over scaffolding systems, starting with effectively eliminating the protracted setup times that dominate scaffolding-based maintenance schedules. Here’s how these customized systems can boost productivity and safety throughout the maintenance process.
Speedy Setup
The amount of time scaffolding systems take to erect is their biggest deterrent and the greatest drain on maintenance productivity. This is due in part to the sheer complexity of the operation, which includes juggling a variety of pipes, hardware, boards and other materials to create the structure. Erection times vary based on vessel size and configuration, but even with an experienced crew, scaffolding can take several shifts all the way up to an entire week to construct. This puts significant stress on maintenance budgets and timelines.
To simplify the process and decrease setup times, suspended platforms implement a modular design and pin-together construction. This greatly reduces the number of components and tools required for erection and allows crews to complete setup in as little as two hours.
Modular components manufactured from high-strength 6061-T6 aluminum provide the same strength as steel at one-third of the weight. And, because vertical vessels often feature small access points, manufacturers limit the size of modular components. The resulting pieces are easy to maneuver, weighing 40 pounds (18 kg) or less, and fit through a 22-inch-diameter (560-mm-diameter) access hole. This provides a lighter, more easily maneuverable solution than scaffolding’s heavy wooden planks and steel pipes, some of which are up to 14 feet long.
In addition, pin connections allow for fast assembly and improve platform strength over welded connections by allowing for some flexibility while the platform is being raised or lowered. Welded joints are rigid, which increases stress on risers at platform joints. Pin-together joints are a better solution to help maintain safety and stability when dealing with varying speeds from the climbing hoists.
It is worth noting that suspended platforms require some initial site preparations. This can increase setup times the first go-round – sometimes up to a full shift for complicated systems. But in the long run, a suspended platform can save facilities significant time and effort with each use, leading to significant ROI potential.
For example, a copper plant replaced the scaffold system for their smelter with a custom suspended platform. This increased productivity and safety. Overall, the plant was able to save 320 man-hours per shutdown with the new system.
Room to Move
Even after the platform is assembled, the productivity benefits continue to add up. With scaffolding, tools and materials need to be hoisted up to working height a little at a time, often manually. This is a slow process with a heavy physical toll. It also limits productivity by restricting supply lines for materials, such as refractory brick, gunning equipment or other necessities.
A suspended platform, on the other hand, can easily transport up to 6,000 pounds (2,722 kg) up and down, and the open design provides ample space for personnel, tools and materials. This allows several workers to operate in the same area comfortably, as well as have everything they need close at hand for efficient maintenance. Crews simply load all necessary materials at the start of the shift while the platform is positioned at the vessel’s access point. When more brick or other supplies are required, the crew lowers the platform, loads the necessary materials and then easily returns to height. This saves considerable time and energy and can increase productivity by limiting the number of trips up and down.
The platform also provides more room and easier positioning for equipment such as gunning machines for shotcrete applications. Crews simply set up the machine directly on the platform and maneuver the entire system up and down, eliminating downtime from repositioning while maintaining an ideal distance from the vessel surface for proper adhesion. Using a suspended platform for this application also eliminates the physical toll and risk to crews from heavy hoses hanging from the scaffolding.
In addition, the open platform and electric hoist system allow for infinitely variable height, resulting in unrivaled access for inspection, removal and replacement of refractory materials.
Scaffolding is inherently rigid; it has to be to create a sturdy base of operations. However, this rigidity restricts crew access to the burn surface. Pipes inhibit visual inspection and make it difficult to work on the area directly behind them. The scaffolding structure can also obscure small flaws, causing them to be overlooked. Crews must squat down or reach up high when working on surfaces in between 8-foot scaffolding stories.
Suspended platforms provide crews with 360-degree access at a comfortable working height, regardless of the task at hand. To optimize accessibility and productivity for a particular facility, manufacturers also customize designs to fit vessels up to 22 feet in diameter, so crews can get directly against the burn surface without risk of falling. This allows crews to inspect every inch, catching even the small flaws that could lead to bigger problems down the line if overlooked. Also, some suspended platforms allow crews to adjust the size of the platform by up to 3 feet while suspended by changing the outer panels. This results in better accessibility and easy transition between different widths of a vessel.
Ergonomics for Better Economics
It goes without saying that having a platform, rather than a narrow scaffold, increases worker safety.
Falls continue to rank number one in workplace injury reports, and refractory repair is not immune to tragic accidents. Recent U.S. Bureau of Labor Statistics data identified 338 fatal falls to the lower level among 1,038 total construction fatalities for the year. That same year, falls on the same level or to lower levels amounted to $17.1 billion (29.2%) of the nearly $60 billion spent by employers on serious, non-fatal workplace injuries.
A suspended platform replaces narrow wooden catwalks with an aluminum surface that spans the entire vessel, eliminating the risk of falls or dropped objects. It also eliminates the need for workers to climb up and down carrying small tools and the need to haul materials and larger equipment up to height, hand over hand, resulting in a much safer jobsite.
There are long-term safety benefits that go beyond this. From setup through all aspects of refractory maintenance, an aluminum suspended platform puts less physical strain on employees. The lightweight, modular components are less cumbersome than long poles and heavy wooden planks. Easy access to materials and tools reduces the risk of repetitive-motion injuries as well as minor cuts, bruises or scrapes that come with manually moving refractory materials. Being able to position the platform at the ideal working height for the job at hand limits bending or reaching, providing an ergonomic solution instead.
All of these small but significant safety benefits lead to long-term savings in the form of worker’s compensation claims and insurance premiums.
Making the switch to a suspended platform requires some initial planning, but positive returns are almost immediate. Facilities that have made the switch save tens of thousands of dollars with each maintenance cycle, providing a return on investment in one or two uses. The key is working with a reputable manufacturer that can provide a customized platform that fits a facility's needs perfectly. Working together, these partners can revolutionize refractory maintenance in vertical vessels.
Mr. Jayesh Vadukiya, M.D, New Age Construction Equipment Engineering Company
New Age Construction Equipment Engineering Company is one of the leading manufacturers of construction equipment such as Rope Suspended Working Platforms (Gondolas/Cradles), Bar Bending Machines, Bar Cutting Machines, etc. The company strictly complies with ISO 9001:2008 certification and its products have also received CE certificates. Stringent quality standards conforming to “OE” norms enable it to guarantee 100% satisfaction across the entire range of products.
New Age believes in innovation, technology, and customization of its products, based on market research and end-users’ expectations, and has a strong sales & service team of professionals. The company has many instances of innovation and customization, especially of its Rope Suspended Platforms (RSP) / Gondolas/ Cradles. Presenting here two success stories on customized RSP for Dam & Silo Project.
The job was to clean the wall of the dam. It was a very difficult job because of the wind pressure and the height of the wall. The width of the road on the dam was too short to fix a standard upper mechanism of RSP. Another problem was the customer’s requirement of designing the upper mechanism in such a way that vehicles should also pass through the upper mechanism and their movement should not be stopped during the cleaning.
Moreover, the upper mechanism was so heavy that it was next to impossible to shift it. The customer wanted to move the wall machine (upper mechanism + cradle) from one place to another in a short time, and we did that without the help of any laborers.
We designed the RSP in such a way that the client’s requirements were fulfilled and the work was completed on time. We also provided a specially designed motorized device for shifting the wall machine without any need for labor. With our vast experience of challenging projects, we are always ready to take on new assignments and try to resolve all issues through our customized solutions.
|
|
|
Simulation of tin penetration in the float glass process |
Posted by: zjjsw25ss - 08-23-2021, 06:55 AM - Forum: Welcomes and Introductions
- No Replies
|
|
The flat glass produced by the float glass process has a tin-rich surface due to contact with molten tin. The penetration of tin into the glass surface is assumed to involve coupled diffusion of stannous (Sn2+) and stannic (Sn4+) ions. The diffusion coefficients of these ions were calculated using a modified Stokes–Einstein relation, with the oxidation velocity of stannous ions depending on the oxygen activity in the glass. The ion diffusion was analyzed using a coupled diffusion simulation with a modified diffusion coefficient to compensate for the effect of the glass ribbon’s stretching or compression in the glass forming process. Tin penetration simulations for both green glass and clear glass show an internal local tin concentration maximum in green glass which is quite different from that in clear glass. The local maximum in the profile is associated with the accumulation of stannic ions where the greatest oxygen activity gradient occurs. Since more float time is needed in the manufacture of thicker glass plate, the tin penetrates to a greater depth in thicker glass, with the maximum located deeper in the glass and larger in size.
The float glass process, which was originally developed by Pilkington Brothers in 1959 (Haldimann et al., 2008), is the most common manufacturing process for flat glass sheets. More than 80–85% of the global production of float glass is used in the construction industry (Glass for Europe, 2015a). In the float glass process, the ingredients (silica, lime, soda, etc.) are first blended with cullet (recycled broken glass) and then heated in a furnace to around 1600°C to form molten glass. The molten glass is then fed onto the top of a molten tin bath. A flat glass ribbon of uniform thickness is produced by flowing the molten glass over the tin bath under controlled heating. At the end of the tin bath, the glass is slowly cooled down and is then fed into the annealing lehr for further controlled, gradual cooling. The thickness of the glass ribbon is controlled by changing the speed at which the glass ribbon moves into the annealing lehr. Typically, glass is cut into large sheets of 3 m × 6 m. Flat glass sheets of thickness 2–22 mm are commercially produced by this process. Usually, glass of thickness up to 12 mm is available in the market, and much thicker glass may be available on request. A schematic diagram of the production process of float glass is shown in Fig. 5.2.

The float glass process was invented in the 1950s in response to a pressing need for an economical method of creating flat glass for automotive as well as architectural applications. Existing flat glass production methods created glass with irregular surfaces; extensive grinding and polishing was needed for many applications. The float glass process involves floating a glass ribbon on a bath of molten tin and creates a smooth surface naturally. Floating is possible because the density of a typical soda-lime-silica glass (~2.3 g/cm3) is much less than that of tin (~6.5 g/cm3) at the process temperature. After cooling and annealing, glass sheets with uniform thicknesses in the ~1–25 mm range and flat surfaces are produced. The float glass process is used to produce virtually all window glass as well as mirrors and other items that originate from flat glass. Since float glass is ordinarily soda-lime-silica, the reference temperatures and behavior of this glass are used in the discussion below.
Figure 3.48 shows the basic layout of the float glass line. The glass furnace is of the horizontal type, as described above. For a float line, the glass furnace is typically on the order of ~150 ft long by 30 ft wide and holds around 1200 tons of glass. To achieve good chemical homogeneity, the glass is heated to ~1550–1600°C in the furnace, but is then brought to about 1100–1200°C in the forehearth. From there, the glass flows through a channel over a refractory lipstone or spout onto the tin bath. As it flows, the glass has a temperature of about 1050°C and a viscosity of about 1000 Pa·s. A device called a tweel meters the flow of the molten glass.

Imperfections include bubbles (or ‘seeds’) that may have a number of possible sources, the most common being gas evolved during firing. Bubbles may contain crystalline materials formed during cooling of the glass that may provide clues to the origin of the bubbles. Cords are linear features within the glass that may result from imperfectly homogenized raw materials, dissolved refractories or devitrified material. Figure 360 shows the appearance of soda–lime–silica glass that exhibits bubbles and cords. ‘Stones’ are solid crystalline substances occurring in glass that are regarded as defects. They are usually derived from the batch material, refractories, or devitrification. Figure 361 shows the appearance of soda–lime–silica glass that contains a devitrification ‘stone’. These may develop as the result of incomplete mixing of the molten glass constituents and/or too low a firing temperature. The ‘stone’ shown in Figure 361 contains an aggregation of tridymite crystals (see 362).
As the floating glass ribbon traverses the length of the tin bath, its properties change dramatically. The glass enters as a viscous liquid and exits as virtually a solid at a temperature very close to its glass transition temperature. The details of how the temperature changes and the viscosity builds are complicated. On one side, the free surface of the glass is exposed to the atmosphere; heat can leave this surface by radiation or convection. Cooling and heating apparatuses are stationed above the glass ribbon along the length of the bath to allow adjustment of the ribbon temperature. On the other side, the glass is in contact with the tin bath, which can absorb some of the heat and transport it away from the ribbon. The tin bath is in constant motion due to the moving glass above it as well as thermal convection currents. Unfortunately, no simple approximations can be made to simplify the modeling of the heat transfer.
The thickness of the float glass sheet is adjusted by controlling the flow onto the tin bath as well as by tension exerted along the length of the bath by rollers in the annealing lehr and, sometimes, by rollers in the bath unit itself. In the Pilkington design, the melt enters the bath and spreads out laterally to a thickness near the equilibrium value. If a sheet thicker than the equilibrium is required, then this spreading is constrained with physical barriers. If a sheet thinner than equilibrium is needed, then the glass ribbon is pulled in tension by rollers. In the PPG design, thickness is regulated by the tweel position and by tension from rollers in the lehr. The thermal profile allows the thinning deformation to take place effectively. A short distance away from the entry point, the temperature of the ribbon drops and the viscosity rises. Overhead coolers help this process. The glass viscosity is then high enough that knurled rollers can contact the glass ribbon and pull it forward (and, in some operations, laterally as well). Heaters are placed shortly downstream of these edge rollers to raise the temperature of the ribbon and create a deformable zone. This zone is followed by coolers that again lower the temperature and raise the viscosity. At exit from the lehr, the ribbon is virtually solid. The main deformation is due to the rollers in the lehr, which pull on the glass ribbon from the lehr to the edge rollers; extension takes place in the deformation zone. Example 3.15 considers the exit velocity of glass from the process.
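Along the lines of the exit-velocity example mentioned above, mass conservation alone fixes the ribbon's speed ratio once the thickness (and width) change is known: for an incompressible melt, v_exit·t_exit·w_exit = v_in·t_in·w_in. The numbers below (near-equilibrium entry thickness drawn down to 4 mm at constant width) are illustrative assumptions of mine, not the textbook's Example 3.15.

def exit_velocity(v_in_m_min, t_in_mm, w_in_m, t_out_mm, w_out_m):
    """Ribbon exit speed from volume conservation of the (incompressible) glass."""
    return v_in_m_min * (t_in_mm * w_in_m) / (t_out_mm * w_out_m)

# Illustrative: ribbon enters near the ~7 mm equilibrium thickness at 5 m/min
# and is drawn down to 4 mm at (assumed) constant width.
v_out = exit_velocity(v_in_m_min=5.0, t_in_mm=7.0, w_in_m=3.3, t_out_mm=4.0, w_out_m=3.3)
print(f"exit speed ~ {v_out:.2f} m/min")   # 1.75x the entry speed for these numbers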
For many years, however, the glass industry has been trying to solve a problem which affects almost every building in the world. How do you maintain the fundamental characteristics of glass, such as optical clarity and external esthetics without constant and costly maintenance? Whether the building is for commercial or residential use, the one constant requirement is for regular cleaning to be undertaken to ensure the glass maintains its optimum appearance.
The challenge for the glass industry is increased as a result of architects finding ever more resourceful and novel uses for glass. The use of glass in atria and overhead glazing can sometimes result in complex areas, which can make maintenance more difficult.
In addition to the esthetic issues it is a well-known phenomenon that if glass is not cleaned regularly then over a period of time the glass can weather, which makes it almost impossible to restore its esthetic properties. In extreme circumstances this can lead to the glass needing replacement.
The process of cleaning windows can also lead to safety and environmental issues. Window cleaning generally involves the use of portable ladders for cleaning windows on ground, first, and second floors. Figures for accidents reported to the Health and Safety Executive (HSE) and local authorities reveal that unfortunately between two and seven window cleaners have been killed every year in Great Britain and around 20–30 suffer major injuries due to falls involving ladders. From an environmental aspect window cleaning can involve the use of harsh chemicals. These are often washed off during the cleaning process and can ultimately lead to ground contamination.
Recently, self-cleaning coatings have been developed, which are designed to reduce the amount of maintenance required by working with the forces of nature to clean dirt from the glass. These coatings are based on a well-known metal oxide called titanium dioxide, which is regularly used in paints, toothpaste, and sunscreens.
Tin is an ideal bath material because it has the right set of physical properties. Tin melts at 232°C, has relatively low volatility, and does not boil until over 2000°C. Molten tin is denser than molten glass and is not miscible or reactive with molten glass. The gas atmosphere is controlled so that tin does not oxidize at a fast rate. Any oxide that does form is collected in a dross container on the bath.
Regulating the flow of the glass is important at this stage, both at the entry point and for the lateral flow. The glass flow onto the tin bath is regulated by a gate, called a tweel, which is located in the canal between the forehearth and the spout. The glass flows down the spout or lipstone onto the tin surface. There is some pressure driving this flow through the gap of the tweel. See Example 3.14. As the glass flows onto the tin bath, the thickness of the glass sheet depends on how that flow is controlled laterally and along the length of the bath. The first step to understanding thickness control is to examine the equilibrium thickness.
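A sketch of the equilibrium-thickness argument referred to above: a pool of glass floating on denser tin spreads under gravity until surface and interfacial tensions balance the spreading, which for a large pool gives t_eq = sqrt(2·S / (rho_glass·g·(1 − rho_glass/rho_tin))), where S combines the glass surface tension, the glass-tin interfacial tension and the tin surface tension. The value of S used below (~0.35 N/m) is an assumed representative figure chosen only to show that the result lands near the well-known ~7 mm equilibrium thickness; it is not taken from this text.

import math

def equilibrium_thickness(s_n_per_m, rho_glass, rho_tin, g=9.81):
    """Equilibrium thickness of a liquid glass pool floating on molten tin,
    from the balance of gravity-driven spreading against surface/interfacial tension."""
    return math.sqrt(2.0 * s_n_per_m / (rho_glass * g * (1.0 - rho_glass / rho_tin)))

# Densities as quoted above (~2.3 and ~6.5 g/cm3); S is an assumed effective tension.
t_eq = equilibrium_thickness(s_n_per_m=0.35, rho_glass=2300.0, rho_tin=6500.0)
print(f"equilibrium thickness ~ {t_eq*1e3:.1f} mm")   # close to ~7 mm

This is why thicker sheets require lateral barriers to restrain spreading, while thinner sheets must be pulled in tension by the rollers described earlier.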
|
|
|
|