Categories
Operating System

Intel, ARM, and the Future of the Mac

For years, there have been rumors that Apple wants to move away from Intel and x86 processors to something it designs in house. This desire comes from a combination of Intel’s slowing pace and the rapid improvement of Apple’s own A-series chips, which the company uses in the iPhone and iPad. Moving to a new CPU architecture is not without its challenges, and it would not be the first such transition Apple has undertaken. The last major change was from PowerPC to Intel, announced in 2005. That transition was made due to the lack of innovation from IBM: Intel’s roadmap promised much more powerful chips than what IBM was offering, while IBM was slowly reorienting its product line toward servers and was already having trouble meeting the power and thermal targets Apple was trying to achieve.

Much of that same situation is happening now with Intel and ARM processors. For the last several generations, Intel’s improvements have been aimed at power efficiency rather than raw performance. Many PC owners haven’t had a reason to upgrade their Sandy Bridge CPUs to the latest generation. Intel’s latest generation of chips, Kaby Lake, is based on the same architecture as two generations ago; Kaby Lake is the second “iterative” step on the same process architecture. This is mostly due to Intel’s problems producing 10nm chips (its current chips are built on a 14nm process). Intel has not delivered the increased power that many Mac users have been craving, especially for their pro desktops.

On the other hand, Apple has been one of the leading innovators in ARM processor design. ARM Holdings designs the basic processor architecture and licenses those designs to companies such as Apple, Samsung, and Qualcomm, which build their own systems on a chip (SoCs). While these chips are not x86, they are much more power efficient and require fewer transistors. ARM chips are getting to the point where they are almost as powerful as some Intel chips. For example, the iPad Pro benchmarks higher than the 12” MacBook in both single-core and multi-core tests. It would certainly be possible to produce a high-power ARM processor that could replace the Intel chips Apple uses. With the slow progress Intel has made, it’s not a matter of if, but rather when.

Rumors suggest that Apple has already ported macOS from x86 to ARM internally, and that the ARM version of macOS has many similarities to iOS. While the pros and cons of this are up for debate, it’s easy to predict from past macOS updates that this is where the platform is going. A switch to ARM would mean that app developers would have to do some work to update their apps, as x86 applications will not natively run on ARM chips. But Apple managed a similar transition from PowerPC to Intel; in that case, the pros and cons were very similar to what they are now, and overall the market was very happy with the switch. Would you be happy with a switch to ARM chips if it meant a faster and lighter machine for you?

Categories
Operating System

JLab Audio Epic2

After the purchase of my iPhone 7 and the tragic loss of my ability to plug in my Audio-Technica ATH-M50x headphones, I decided that it was time to go wireless. The 3.5 mm-to-Lightning adapter was not something I wanted to rely on; it would be too easy to lose and just looks silly. I wanted good all-around earbuds that I could use while studying, biking and walking around campus, working out at the Rec, and going on runs. Some of the potential candidates were the Beats Powerbeats3 Wireless, JLab’s Audio Epic2, the Bose SoundSport Wireless, and Apple AirPods.

The Powerbeats3 are currently going for $149.99 on Amazon and $199.99 from Apple. The Powerbeats have a cable that connects the two earbuds, and I didn’t want such a lengthy cable, but they’re very good for exercise and have a long battery life of 12 hours. They also have a remote and microphone support to take calls.


JLab’s Audio Epic2 had a more modest price tag at $99.99. They have a cable that connects the two wrap-around in-ear earbuds and also boast a 12-hour battery life. I’ve enjoyed using the Epic2s over the last several weeks since my purchase. The wireless earbuds come with seven different sizes and form factors of plastic in-ear pieces so they fit comfortably, and the wires wrap around the ear for a light and well-secured fit.

My one complaint with the ear pieces is that they insulate from outside noise almost too well, so they have to be popped out whenever you want to have a conversation or purchase a coffee on your commute to classes.

The JLab Audio Epic2s also perform admirably as wireless fitness earbuds. They’re loud and rarely need to be turned up to the max, even in a noisy gym setting. They also feel light and well secured and don’t shift with movement, which makes them an ideal choice for music on a run or at the gym. They’re also protected against damage from sweat or splashes, so you don’t have to worry about short-circuiting them, which was a concern raised in some of the reviews I read on Amazon.

All in all, I would say that these are a solid purchase for wireless earbuds. They come at a low price compared to their competition, and although they don’t look as flashy as the Powerbeats, they perform just as well with the same battery life. I would strongly recommend them to anyone who is looking to go wireless or has made the decision to purchase an iPhone 7.

Categories
Operating System

Technology is Taking Our Jobs

Today, April 13, 2017, Elon Musk sent out a tweet stating that Tesla plans to reveal its semi-truck line in September. Tesla is the same company that produces electric, automated cars. The fact that Tesla is making semi-trucks is not in itself important news; what matters is the repercussions that come with it. When I first saw the tweet, it made me think of the semi-trucks from the Logan film that was recently in theaters. They were automated, with no one driving; this is basically what Elon Musk and company are striving to achieve. That movie took place in 2029, and it could be a reality by then as well. The problem that not just the U.S. but the rest of the world will face is another industry taken over by machines and a loss of millions of jobs. According to Alltrucking.com, the U.S. has 3.5 million truckers and is actually looking for more. This touches on a larger issue in our society today: more industries are becoming mechanized. With industries no longer needing the same number of humans to do the labor, we get a labor crisis. It’s part of why Donald Trump was elected; he promised to bring jobs back to America, but the real problem is not jobs leaving for other countries so much as our accelerating technological advancement. This isn’t just a Trump issue but a problem faced by every leader in the world. How do we create jobs, whether through government or by getting businesses to build new industries, that can’t be taken over by computer systems?

If it is not possible for us to create the jobs required, then we must come up with subsidies and an allowance for those people who cannot acquire a job. The trucking industry may be the next industry to go, but it won’t be the last and might not even have the biggest impact. The oil industry, which supplies 8.5 million jobs, also won’t last, and whether the economy can handle the massive hit will depend on what the governments of the world replace it with.

Categories
Linux

Why Making the Jump to Linux May be for you


Do you feel that Windows no longer respects your privacy? Or do you feel that Macs are too expensive? Linux might be just right for you then! Linux is an open source operating system. Although it has been around for some time now, it is slowly gaining more popularity. While Linux is often seen as the geeky computer nerd operating system, it can be perfect for average users too. Linux is all about allowing user customization and giving fine system control to the user.

Linux is Completely Free!

One of the greatest things about Linux is that it is completely free. Unlike Windows or macOS, you don’t need to pay anything in order to use it. As the latest version of Windows or macOS slowly becomes old, you will eventually need to upgrade, and sometimes that means purchasing new licensing, which can be an unneeded financial hit. If you have the hardware, you can simply find a distribution you like and install it. Whether this is for one machine or 1,000 machines, Linux will never bother you for a license key.

A Tweaker’s Dream


Linux is the dream operating system for someone who enjoys playing around with settings to fine-tune their machine. Linux offers multiple desktop environments, which completely change how the desktop behaves. Each of these has hundreds, or possibly thousands, of settings so that users can make their experience exactly how they envision it. This is contrary to Windows and macOS, which consist of one desktop with fairly limited customization options. Almost everything in Linux has a right-click menu which allows for further customization. For the extremely motivated tweakers, there are also configuration files which allow you to modify almost anything on your system. A personal favorite tweak is the use of universal keyboard shortcuts. As an avid user of the terminal, I’m able to launch it from anywhere with a single keystroke.

Gaining a Better Knowledge of Computers

Linux features a terminal similar to macOS. Mastering the terminal allows you to tell a computer what you really want it to do; you no longer have to rely on menus and clicking. Linux is an excellent place to learn terminal commands because you end up using the terminal constantly, whether to fix something or simply because it is often the quickest way to get things done.

By using Linux, every user becomes aware of file permissions, and how they work. Users also become adept at using commands like top and ps aux to understand how processes work. Linux users also often learn to use commands like rsync to create backups. Finally, many users that delve a little deeper into Linux also learn about computer architecture, such as how operating systems work, and how storage devices are mounted.
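
As a rough illustration of the kind of thing you pick up, here is a short Python sketch that inspects a file’s permission bits and then shells out to rsync for a backup. It assumes a Linux system with rsync installed, and the paths are hypothetical examples, not recommendations.

    import stat
    import subprocess
    from pathlib import Path

    # Decode the permission bits of a file -- the same info "ls -l" prints.
    target = Path("/etc/hostname")                        # hypothetical example path
    print(target, stat.filemode(target.stat().st_mode))   # e.g. -rw-r--r--

    # Mirror a directory into a backup location with rsync (paths are made up).
    subprocess.run(
        ["rsync", "-a", "--delete",
         str(Path.home() / "Documents") + "/", "/mnt/backup/Documents/"],
        check=True,
    )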

Linux Has Some Amazing Software


While Linux has a reputation for being incompatible with certain software, it also offers an enormous repository of software for its users. Many major programs, such as the Google Chrome and Firefox web browsers, are also available for Linux. Additionally, many programs have Linux alternatives that work just as well, or even better. Better yet, software on Linux is completely free too. You can get very capable productivity software like LibreOffice for creating documents and Okular for viewing PDF files.

Linux is Efficient

Linux fits on small systems and large systems. It works on slow computers and fast ones too. Linux is built by efficiency-obsessed engineers who want to get every ounce of computing power out of their machines. Most flavors of Linux are designed to be lighter weight than their Windows or macOS counterparts. Linux also offers excellent utilization of computer hardware, as the operating system is built to handle resource management efficiently.

The storage architecture of Linux is built so that a dependency for a program never needs to be installed twice: all programs have access to any dependency that is already installed. In Windows, by contrast, every program you install typically ships with its own copies of its dependencies. This often leads to programs bundling the exact same software and thus taking up more space on the hard drive.

Hardware Just Works

Perhaps you have an older laptop, or maybe a new cutting-edge PC. A common problem for both kinds of hardware is a lack of drivers. Older computers often have hardware that is no longer supported by new operating systems, and new hardware is occasionally plagued by buggy driver support. On popular distributions such as Ubuntu or Linux Mint, driver support for almost all hardware is provided out of the box. This is because the Linux kernel (or core) is designed to ship with these drivers, whereas Windows often requires them as a separate install. Additionally, Linux drivers tend to be more generic than Windows drivers, which allows Linux to reach a broader spectrum of hardware, even if a driver was not designed with that particular older or newer hardware in mind. Finally, Linux’s amazing hardware support is a product of its users. If you ever decided to dig around in the Linux kernel, you would find an enormous number of very specific hardware drivers contributed by various Linux users over time. Unlike Linux, Windows does not give an average user a way to create a driver for their hardware. Linux’s software and distribution model empowers users to create their own drivers if hardware is not supported.

 

Overall, Linux is a finely tuned operating system that deserves a look. With its many features, it is able to offer an experience tailor made to any user. You can reclaim control of your computer, and make it exactly the way you want!

 

Categories
Hardware Software

A Basic Guide to Digital Audio Recording

The Digital Domain


Since the dawn of time, humans have been attempting to record music.  For the vast majority of human history, this has been really, really difficult.  Early cracks at getting music out of the hands of the musician involved mechanically triggered pianos whose instructions for what to play were imprinted onto long scrolls of paper.  These player pianos were difficult to manufacture and not really viable for casual music listening.  There was also the all-important phonograph, which recorded sound itself mechanically onto the surface of a wax cylinder.

If it sounds like the aforementioned techniques were difficult to use and manipulate, they were!  Hardly anyone owned a phonograph since they were expensive, recordings were hard to come by, and they really didn’t sound all that great.  Without microphones or any kind of amplification, bits of dust and debris which ended up on these phonograph records could completely obscure the original recording behind a wall of noise.

Humanity had a short stint with recording sound as electromagnetic impulses on magnetic tape.  This proved to be one of the best ways to reproduce sound (and do some other cool and important things too).  Tape was easy to manufacture, came in all different shapes and sizes, and offered a whole universe of flexibility for how sound could be recorded onto it.  Since tape recorded an electrical signal, carefully crafted microphones could be used to capture sounds with impeccable detail and loudspeakers could be used to play back the recorded sound at considerable volumes.  Also at play were some techniques engineers developed to reduce the amount of noise recorded onto tape, allowing the music to be front and center atop a thin floor of noise humming away in the background.  Finally, tape offered the ability to record multiple different sounds side-by-side and play them back at the same time.  These side-by-side sounds came to be known as ‘tracks’ and allowed for stereophonic sound reproduction.

Tape was not without its problems though.  Cheap tape would distort and sound poor.  Additionally, tape would deteriorate over time and fall apart, leaving many original recordings completely unlistenable.  Shining bright on the horizon in the late 1970s was digital recording.  This new format allowed for low-noise, low-cost, and long-lasting recordings.  The first pop record to be recorded digitally was Ry Cooder’s Bop till You Drop in 1979.  Digital had a crisp and clean sound that was rivaled only by the best of tape recording.  Digital also allowed for near-zero degradation of sound quality once something was recorded.

Fast-forward to today.  After 38 years of Moore’s law, digital recording has become cheap and simple.  Small audio recorders are available at low cost with hours and hours of storage for recording.  Also available are heftier audio interfaces which offer studio-quality sound recording and reproduction to any home recording enthusiast.

 

Basic Components: What you Need

Depending on what you are trying to record, your needs may vary from the standard recording setup.  For most users interested in laying down some tracks, you will need the following.

Audio Interface (and Preamplifier): this component is arguably the most important as it connects everything together.  The audio interface contains both analog-to-digital converters and a digital-to-analog converter; these allow it to turn sound into the language of your computer for recording, and turn the language of your computer back into sound for playback.  These magical little boxes come in many shapes and sizes; I will discuss these in a later section, just be patient.

Digital Audio Workstation (DAW) Software: this software will allow your computer to communicate with the audio interface.  Depending on what operating system you have running on your computer, there may be hundreds of DAW software packages available.  DAWs vary greatly in complexity, usability, and special features; all will allow you the basic feature of recording digital audio from an audio interface.

Microphone: perhaps the most obvious element of a recording setup, the microphone is one of the most exciting choices you can make when setting up a recording rig.  Microphones, like interfaces and DAWs, come in all shapes and sizes.  Depending on what sound you are looking for, some microphones may be more useful than others.  We will delve into this momentarily.

Monitors (and Amplifier): once you have set everything up, you will need a way to hear what you are recording.  Monitors allow you to do this.  In theory, you can use any speaker or headphone as a monitor.  However, some speakers and headphones offer more faithful reproduction of sound without excessive bass and can be better for hearing the detail in your sound.

 

Audio Interface: the Art of Conversion

Two channel USB audio interface.

The audio interface can be one of the most intimidating elements of recording.  The interface contains the circuitry to amplify the signal from a microphone or instrument, convert that signal into digital information, and then convert that information back to an analog sound signal for listening on headphones or monitors.

Interfaces come in many shapes and sizes but all do similar work.  These days, most interfaces offer multiple channels of recording at one time and can record in uncompressed CD-audio quality or better.

Once you step into the realm of digital audio recording, you may be surprised to find a lack of mp3 files.  Turns out, mp3 is a very special kind of digital audio format and cannot be recorded to directly; mp3 can only be created from existing audio files in non-compressed formats.

You may be asking yourself, what does it mean for audio to be compressed?  As an electrical engineer, it may be hard for me to explain this in a way that humans can understand, but I will try my best.  Audio takes up a lot of space.  Your average iPhone or Android device maybe has 32 GB of space, yet most people can keep thousands of songs on their device.  This is done using compression.  Compression is the computer’s way of listening to a piece of music and removing all the bits and pieces that most people won’t notice.  Soft and infrequent noises, like the sound of a guitarist’s fingers scraping a string, are removed, while louder sounds, like the sound of the guitar itself, are left in.  This is done using the Fourier Transform and a bunch of complicated mathematical algorithms that I don’t expect anyone reading this to care about.
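
For the curious, here is a toy Python sketch (it requires NumPy) of that idea: transform a signal into the frequency domain, throw away the quietest components, and transform back.  Real codecs like mp3 use a psychoacoustic model and more sophisticated math, so treat this purely as an illustration.

    import numpy as np

    # Toy "lossy compression": drop the quietest frequency components of a signal.
    rate = 44100                                   # samples per second
    t = np.arange(rate) / rate                     # one second of time values
    signal = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(rate)  # tone + faint noise

    spectrum = np.fft.rfft(signal)
    threshold = 0.05 * np.abs(spectrum).max()
    kept = np.abs(spectrum) >= threshold           # keep only the loud components
    reconstructed = np.fft.irfft(np.where(kept, spectrum, 0), n=len(signal))

    print(f"kept {int(kept.sum())} of {len(spectrum)} frequency bins")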

When audio is uncompressed, a few things are true: it takes up a lot of space, it is easy to manipulate with digital effects, and it often sounds very, very good.  Examples of uncompressed audio formats are: .wav on Windows, .aif and .aiff on Macintosh, and .flac for all the free people of the Internet.  Uncompressed audio comes in many different forms but all have two numbers which describe their sound quality: ‘word length’ or ‘bit depth’ and ‘sample rate.’

The information for digital audio is contained in a bunch of numbers which indicate the loudness or volume of the sound at a specific time.  The sample rate tells you how many times per second the loudness value is captured.  This number needs to be at least two times higher than the highest frequency you want to capture; otherwise the computer will perceive high frequencies as being lower than they actually are.  This is because of the Shannon-Nyquist sampling theorem, which I, again, don’t expect most of you to want to read about.  Most audio is captured at 44.1 kHz, making the highest frequency it can capture 22.05 kHz, which is comfortably above the limits of human hearing.

The word length tells you how many bits are used to represent the different volumes of loudness.  The number of different values for loudness can be up to 2^(word length).  CDs represent audio with a word length of 16 bits, allowing for 65,536 different values for loudness.  Most audio interfaces are capable of recording audio with a 24-bit word length, allowing for exquisite detail.  There are some newer systems which allow for recording with a 32-bit word length, but these are, for the most part, not available at low cost to consumers.
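
Those two numbers are easy to sanity-check yourself; a few lines of Python reproduce the figures quoted above.

    # Sanity-checking the sample rate and word length numbers above.
    sample_rate = 44_100                                  # Hz, CD audio
    print("Nyquist limit:", sample_rate / 2, "Hz")        # 22050.0

    for bits in (16, 24, 32):
        print(f"{bits}-bit word length -> {2 ** bits:,} loudness levels")
    # 16 -> 65,536   24 -> 16,777,216   32 -> 4,294,967,296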

I would like to add a quick word about USB.  There is a stigma, in the business, against USB audio interfaces.  Many interfaces employ connectors with higher bandwidth, like FireWire and Thunderbolt, and charge a premium for it.  It may seem logical: faster connection, better quality audio.  Hear this now: no audio interface will ever be sold with a connector that is too slow for the quality of audio it can record.  This is to say, USB can handle 24-bit audio with a 96 kHz sample rate, no problem.  If you notice latency in your system, it comes from the digital-to-analog and analog-to-digital converters as well as the speed of your computer; latency in your recording setup has nothing to do with what connector your interface uses.  It may seem like I am beating a dead horse here, but many people believe this and it’s completely false.
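
The arithmetic backs this up.  The sketch below compares the raw bit rate of an uncompressed 24-bit, 96 kHz stereo stream against USB 2.0’s nominal 480 Mb/s signalling rate (real-world throughput is lower, but the margin is still enormous).

    # Raw data rate of uncompressed 24-bit / 96 kHz stereo audio vs. USB 2.0.
    bits_per_sample = 24
    sample_rate = 96_000       # Hz
    channels = 2

    audio_mbps = bits_per_sample * sample_rate * channels / 1_000_000
    print(f"audio stream: {audio_mbps:.2f} Mb/s")    # ~4.61 Mb/s
    print("USB 2.0 bus:  480 Mb/s nominal")          # plenty of headroom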

One last thing before we move on to the DAW.  I mentioned earlier that frequencies above half the recording sample rate will be perceived, by your computer, as lower frequencies.  These lower frequencies can show up in your recording and can cause distortion.  This phenomenon has a name: aliasing.  Aliasing doesn’t just happen with audible frequencies; it can happen with ultrasonic sound too.  For this reason, it is often advantageous to record at higher sample rates to avoid having these higher frequencies folded down into the audible range.  Most audio interfaces allow for recording 24-bit audio with a 96 kHz sample rate.  Unless you’re worried about taking up too much space, this format sounds excellent and offers the most flexibility and sonic detail.
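
If you want to see where an out-of-band tone lands, the folding can be computed directly.  This small Python helper is just the standard aliasing formula, not anything specific to a particular interface.

    def alias_frequency(f_hz: float, sample_rate_hz: float) -> float:
        """Frequency at which a pure tone appears after sampling at sample_rate_hz."""
        folded = f_hz % sample_rate_hz
        return min(folded, sample_rate_hz - folded)

    # A 30 kHz ultrasonic tone recorded at 44.1 kHz folds into the audible range:
    print(alias_frequency(30_000, 44_100))   # 14100.0 Hz
    # The same tone recorded at 96 kHz stays safely above human hearing:
    print(alias_frequency(30_000, 96_000))   # 30000.0 Hz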

 

Digital Audio Workstation: all Out on the Table

Apple’s pro DAW software: Logic Pro X

The digital audio workstation, or DAW for short, is perhaps the most flexible element of your home-studio.  There are many many many DAW software packages out there, ranging in price and features.  For those of you looking to just get into audio recording, Audacity is a great DAW to start with.  This software is free and simple.  It offers many built-in effects and can handle the full recording capability of any audio interface which is to say, if you record something well on this simple and free software, it will sound mighty good.

Here’s the catch with many free or lower-level DAWs like Audacity or Apple’s GarageBand: they do not allow for non-destructive editing of your audio.  This is a fancy way of saying that once you make a change to your recorded audio, you might not be able to un-make it.  Higher-end DAWs like Logic Pro and Pro Tools will allow you to make all the changes you want without permanently altering your audio.  This allows you to play around a lot more with your sound after it’s recorded.  More expensive DAWs also tend to come with a better-sounding set of built-in effects.  This is most noticeable with more subtle effects like reverb.

There are so many DAWs out there that it is hard to pick out a best one.  Personally, I like Logic Pro, but that’s just preference; many of the effects I use are compatible with different DAWs so I suppose I’m mostly just used to the user-interface.  My recommendation is to shop around until something catches your eye.

 

The Microphone: the Perfect Listener

Studio condenser and ribbon microphones.

The microphone, for many people, is the most fun part of recording!  They come in many shapes and sizes and color your sound more than any other component in your setup.  Two different microphones can occupy polar opposites in the sonic spectrum.

There are two common types of microphones out there: condenser and dynamic microphones.  I can get carried away with physics sometimes so I will try not to write too much about this particular topic.

Condenser microphones are a more recent invention and offer the best sound quality of any microphone.  They employ a charged parallel-plate capacitor to measure vibrations in the air.  This is a fancy way of saying that the element in the microphone which ‘hears’ the sound is extremely light and can move freely even when motivated by extremely quiet sounds.

Because of the nature of their design, condenser microphones require a small amplifier circuit built-into the microphone.  Most new condenser microphones use a transistor-based circuit in their internal amplifier but older condenser mics employed internal vacuum-tube amplifiers; these tube microphones are among some of the clearest and most detailed sounding microphones ever made.

Dynamic microphones, like condenser microphones, also come in two varieties, both emerging from different eras.  The ribbon microphone is the earlier of the two and observes sound with a thin metal ribbon suspended in a magnetic field.  These ribbon microphones are fragile but offer a warm yet detailed quality-of-sound.

The more common vibrating-coil dynamic microphone is the most durable and is used most often for live performance.  The prevalence of the vibrating-coil microphone means that the vibrating-coil is often dropped from the name (sometimes the dynamic is also dropped from the name too); when you use the term dynamic mic, most people will assume you are referring to the vibrating-coil microphone.

With the wonders of globalization, all types of microphones can be purchased at similar costs.  Though there is usually a small premium for condenser microphones over dynamic mics, costs can remain comfortably around $100-150 for studio-quality recording mics.  This means you can use many brushes to paint your sonic picture.  Oftentimes, dynamic microphones are used for louder instruments like snare and bass drums, guitar amplifiers, and louder vocalists.  Condenser microphones are more often used for detailed sounds like stringed instruments, cymbals, and breathier vocals.

Monitors: can You Hear It?

Studio monitors at Electrical Audio Studios, Chicago

When recording, it is important to be able to hear the sound that your system is hearing.  Most people don’t think about it, but there are many kinds of monitors out there: from the screens on our phones and computers, which let us see what the computer is doing, to the viewfinder on a camera, which lets us see what the camera sees.  Sound monitors are just as important.

Good monitors will reproduce sound as neutrally as possible and will only distort at very very high volumes.  These two characteristics are important for monitoring as you record, and hearing things carefully as you mix.  Mix?

Once you have recorded your sound, you may want to change it in your DAW.  Unfortunately, the computer can’t always guess what you want your effects to sound like, so you’ll need to make changes to settings and listen.  This could be as simple as changing the volume of one recorded track or it could be as complicated as correcting an offset in phase of two recorded tracks.  The art of changing the sound of your recorded tracks is called mixing.

If you are using speakers as monitors, make sure they don’t have ridiculously loud bass, like most consumer speakers do.  Mixing should be done without the extra bass; otherwise, someone playing back your track on ‘normal’ speakers will be underwhelmed by a thinner sound.  Sonically neutral speakers make it very easy to hear what your finished product will sound like on any system.

It’s a bit harder to do this with headphones as their proximity to your ears makes the bass more intense.  I personally like mixing on headphones because the closeness to my ear allows me to hear detail better.  If you are to mix with headphones, your headphones must have open-back speakers in them.  This means that there is no plastic shell around the back of the headphone.  With no set volume of air behind the speaker, open-back headphones can effortlessly reproduce detail, even at lower volumes.


Monitors aren’t just necessary for mixing; they also help you hear what you’re recording as you record it.  Remember when I was talking about the number of different loudnesses you can have for 16-bit and 24-bit audio?  Well, when you make a sound louder than the loudest volume you can record, you get digital distortion.  Digital distortion does not sound like Jimi Hendrix, it does not sound like Metallica; it sounds abrasive and harsh.  Digital distortion, unless you are creating some post-modern masterpiece, should be avoided at all costs.  Monitors, as well as the volume meters in your DAW, allow you to avoid this.  A good rule of thumb is: if it sounds like it’s distorting, it’s distorting.  Sometimes you won’t hear the distortion in your monitors; this is where the little loudness bars on your DAW software come in.  Those bad boys should never hit the top.
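
Numerically, clipping is nothing mysterious: the converter simply runs out of values and flattens anything above full scale.  A short Python sketch (using NumPy) of a sine wave recorded ‘too hot’ into a 16-bit range shows the effect.

    import numpy as np

    # Any sample louder than the largest representable value gets flattened.
    full_scale = 2 ** 15 - 1                          # loudest positive 16-bit value
    t = np.linspace(0, 0.01, 441, endpoint=False)
    too_hot = 1.5 * full_scale * np.sin(2 * np.pi * 440 * t)   # signal 50% too loud

    clipped = np.clip(too_hot, -full_scale - 1, full_scale).astype(np.int16)
    flattened = int(np.sum(np.abs(clipped.astype(int)) >= full_scale))
    print("samples flattened at full scale:", flattened)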

 

A Quick Word about Formats before we Finish

These days, most music ends up as an mp3.  Convenience is important, so mp3 does have its place.  Most higher-end DAWs will allow you to export mp3 files.  My advice to any of you learning sound engineers out there is to just play around with formats. However, a basic outline of some common formats may be useful…

24-bit, 96 kHz: This is the best format most systems can record to.  Because of the large file sizes, audio in this format rarely leaves the DAW.  Audio of this quality is best for editing, mixing, and converting to analog formats like tape or vinyl.

16-bit, 44.1 kHz: This is the format used for CDs.  This format maintains about half of the information that you can record on most systems, but it is optimized for playback by CD players and other similar devices.  Its file-size also allows for about 80 minutes of audio to fit on a typical CD.  Herein lies the balance between excellent sound quality, and file-size.

mp3, 256 kb/s: Looks a bit different, right?  The quality of mp3 is measured in kb/s.  The higher this number, the less compressed the file is and the more space it will occupy.  iTunes sells music at 256 kb/s; Spotify probably uses something closer to 128 kb/s to better support streaming.  You can go as high as 320 kb/s with mp3.  Either way, mp3 compression is always lossy, so you will never get an mp3 to sound quite as good as an uncompressed audio file.
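
To put rough sizes on these formats, here is a quick back-of-the-envelope calculation for one minute of stereo audio in each; the numbers ignore container overhead but are close enough for planning storage.

    # Approximate size of one minute of stereo audio in each format above.
    def pcm_mb_per_minute(bits: int, sample_rate: int, channels: int = 2) -> float:
        return bits * sample_rate * channels * 60 / 8 / 1_000_000

    print(f"24-bit / 96 kHz  : {pcm_mb_per_minute(24, 96_000):.1f} MB/min")   # ~34.6
    print(f"16-bit / 44.1 kHz: {pcm_mb_per_minute(16, 44_100):.1f} MB/min")   # ~10.6
    print(f"mp3 @ 256 kb/s   : {256_000 * 60 / 8 / 1_000_000:.1f} MB/min")    # ~1.9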

 

In Conclusion

Recording audio is one of the most fun hobbies one can adopt.  Like all new things, recording can be difficult when you first start out, but it becomes more and more fulfilling over time.  One can create their own orchestras at home now, a feat which would have been near impossible 20 years ago.  The world has many amazing sounds, and it is up to people messing around with microphones in bedrooms and closets to create more.

Categories
Operating System

IOT: Connecting all our stuff to the network of networks

What is the IOT?

The Internet of Things (IOT for short) is the common term for devices that have become integrated with “smart” or internet connectable technologies that use the global infrastructure of the Internet to bring both accessibility and highly improved product experiences to millions of users of common electronics. In this article I’ll be discussing some implications that IOT has on the landscape of the Internet, as well as some IOT devices that have become commonplace in many homes across the nation.

Some things to note about IOT

Many IOT devices offer very promising integrations with online services that make their usefulness indispensable; however, this usefulness can come at the cost of security, so it’s always good to understand the implications of adding an IOT device to a network. A notable event that underscores the importance of securing these connected devices was the Mirai botnet attack carried out on the DNS provider Dyn on Friday, Oct 21, 2016, relevant article here.

Some of the Things:

Amazon Echo

A smarthome hub created by Amazon with the ability to integrate with various devices and services to command and control your smart home and allow for easier access to informational resources. The Alexa service provides an easy to use interface for interacting with various services via speech, a query to Alexa can perform web searches, interact with online services, as well as control some of the devices in this article. More information can be found here.

Google Home

Google’s equivalent to Amazon’s Echo, released in November 2016, the Google Home is able to integrate with about the same number of services as the Echo, and it integrates more directly with the Google smart home ecosystem. The ability to stream directly to a Google Chromecast device connected to the same network as the Home is one of its notable features.

 

Nest Product Line: Cam, Thermostat, Protect

These smart products aim to keep your home automated yet safe. The Cam is a webcam that is accessible via the internet and can act as a speakerphone. The Thermostat is a remotely controllable thermostat that adjusts based on user presence in the home. The Protect is a smoke detector with internet connectivity that can send remote alerts and announces the location of the source of the smoke.

Smart Lighting Products: Philips Hue, GE Link, LIFX

Smart lighting affords users the ability to customize lighting based on their location data, as well as by time of day. Being able to remotely turn lighting on and off also affords users some peace of mind in being able to determine whether they forgot to turn off the lights before leaving the house. These products typically connect to a Zigbee-based hub, which can be used with all Zigbee-compatible devices.

Smart Appliances: Coffeemaker, Dishwasher, Clothes Washer and Dryer

Various smart appliances allow for remotely starting, stopping, and manually adjusting individualized settings.

 

Smart plugs: TP-Link Smart Plug

The smart plug allows for remotely turning on and off a device that is connected to the socket. This type of smart device allows extending remote capabilities to anything that uses a standard power socket.

Smart wearables: Apple Watch, Android Wear, Tizen and Pebble

These devices allow data to be gathered from our person: heart rate and fitness information, location-based information, and remote notifications are some of what can be collected on these devices for display to the user.

Be sure to secure your things, as the data they collect and create becomes increasingly critical the more integrated into our lives they become.

 

 

Categories
Operating System

Engine Management: How Computers Unlocked the Internal Combustion Engine

Introduction

How did engines run before computers?

The internal combustion engine as we know it has always required some level of electronic signal to operate the ignition system. Before the 1980s when the first engine management computer was produced,  the electrical hardware on an engine was fairly rudimentary, boiling down to essentially a series of off and on switches for ignition timing. This is what’s referred to as mechanical ignition.

Mechanical ignition works by sending a charge from a battery to an ignition coil, which essentially stores a high-voltage charge that discharges when provided with a path. This path is determined by a distributor, which is mechanically connected to the crankshaft of the engine. A distributor’s job is just as its name suggests – the rotation of the crankshaft causes the distributor to rotate, connecting the ignition coil to the individual spark plug for each cylinder to ignite the mixture at the right time in the engine’s cycle to produce power.

Of course there are more complexities to how an engine produces power, involving vacuum lines and the workings of a carburetor and mechanical fuel pumps; however, for this article we’re going to focus on electronics.

The First Computers Designed for Engines:

Electronic Fuel Injection, or EFI, has been around since the 1950s; however, before the mid-1970s it was primarily used in motorsport due to its higher cost compared to a carburetor. Japanese companies such as Nissan were pioneers in early consumer EFI systems. The advantages of EFI over carburetors include better startup in cold conditions, as well as massively increased fuel economy. Then in 1980, Motorola introduced the first engine control unit (ECU), which would begin the computer takeover of the car industry.

An ECU replaces the direct mechanical connections with sensors that each read data from different parts of the engine and feed back to the ECU, which crunches the numbers and then determines how to adjust the various components of the engine to make sure it is operating within predetermined limits. An oxygen sensor, or O2 sensor, is possibly one of the most important parts of a modern engine – connected to the exhaust, the O2 sensor reads the level of oxygen present after combustion. This is extremely important, as it tells the ECU how efficiently the engine is currently burning fuel. There are numerous other sensors on engines, but their jobs all fall under the same umbrella: to feed information back to the ECU so that the microprocessor can adjust timing and how much fuel is going into the engine accordingly.
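
The feedback loop itself is simple to picture. Below is a deliberately simplified Python sketch of closed-loop fuel control; the sensor midpoint, gain, and pulse widths are invented for illustration and bear no relation to any real ECU’s calibration.

    # Toy closed-loop fuel control: nudge the injector pulse width until the
    # O2 sensor reads roughly stoichiometric. All constants are made up.
    STOICH_O2_VOLTS = 0.45      # typical narrowband sensor midpoint
    GAIN = 0.05                 # how aggressively to correct

    def adjust_pulse_width(pulse_ms: float, o2_volts: float) -> float:
        # High voltage = rich mixture -> shorten the pulse; low = lean -> lengthen it.
        error = o2_volts - STOICH_O2_VOLTS
        return max(0.5, pulse_ms - GAIN * error * pulse_ms)

    pulse = 3.0                                      # ms of injector-open time
    for reading in (0.80, 0.72, 0.61, 0.52, 0.47):   # sensor swinging back toward 0.45 V
        pulse = adjust_pulse_width(pulse, reading)
        print(f"O2 {reading:.2f} V -> pulse {pulse:.3f} ms")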

Replacing the mechanically driven timing of early engines allows for a wider range of adjustability and control to ensure the engine is running right. This led cars to burn gas much more cleanly and become much more efficient in general. As technology progressed, engine management became even more advanced, allowing for yet more meticulous control, as well as added safety measures. But what else did this computer-powered control do for the automotive industry?

Improvements in Performance

Engine Tuning

With ever increasing processing power, the computers in cars advanced just as quickly as any other computers: exponentially. More efficient control of fuel and timing quickly led to tuning for maximum power and response. EFI and direct injection increased throttle response, and further tuning could be done to give the car a wider powerband – a term that refers to the range of revolutions per minute (RPM) where an engine makes usable power. Manufacturers, realizing the extensive power of ECUs, started building mechanical parts around them to utilize their strengths. Below is a list of variable timing technologies used by several different companies:

  • Variable Valves/ Variable Cam Design
    • Honda VTEC (Variable Valve Timing and Lift Electronic Control)
    • Mitsubishi MIVEC (Mitsubishi Innovative Valve timing Electronic Control System)
    • Toyota VVT-i (Variable Valve Timing with intelligence)
    • Nissan VVL/VVT (Variable Valve Lift/ Variable Valve Timing)

While differing in name and how they are applied, these systems all boil down to controlling the engine timing at different engine speeds (RPM). The word ‘variable’ stands out in all of these, and is possibly the most powerful tool that advanced engine tuning enables. In this case, variable refers to the ability to change the behavior of the engine’s valves and camshafts (a long rod at the top of an engine that tells the valves when to move). As the engine speed increases, what might have been a good design at lower RPM soon starts to fall short, and this is what causes the powerband to drop off. Being able to alter the timing of the engine allows for better high and low end performance, as manufacturers essentially have the opportunity to design their engine for both, and use the ECU to switch modes at the optimal time.
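
Conceptually, the ECU’s side of a system like this boils down to a lookup keyed on engine speed. The toy Python sketch below switches between two invented cam profiles at a made-up crossover RPM; it is only meant to illustrate the “variable” idea, not any manufacturer’s actual control logic.

    # Toy illustration of "variable" cam control: pick a profile based on RPM.
    LOW_RPM_PROFILE = {"lift_mm": 8.0, "duration_deg": 230}    # smooth low-end torque
    HIGH_RPM_PROFILE = {"lift_mm": 11.5, "duration_deg": 280}  # better breathing up top
    CROSSOVER_RPM = 4500                                       # invented switch point

    def select_cam_profile(rpm: int) -> dict:
        return HIGH_RPM_PROFILE if rpm >= CROSSOVER_RPM else LOW_RPM_PROFILE

    for rpm in (1500, 3000, 4500, 7000):
        print(rpm, "RPM ->", select_cam_profile(rpm))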

Looking Forward

Hybrids

Most people think of hybrids as the Toyota Prius, something designed with pure efficiency in mind; however, some supercar companies have taken hybrid technology and adapted it for performance. Supercars such as the McLaren P1 and Porsche 918 utilize electric motors to complement the power of the conventional combustion engine. Managed by an advanced ECU, the electric motors are used to provide immediate power while the gas engine is accelerating into its powerband. While the electric motors can be used separately in place of the gas engine, they mainly serve to further fill in the gaps that the variable timing technology we talked about previously could not. As regular hybrid technology continues to advance, we can expect to see the same with respect to response and performance.

 

While engine efficiency is still being improved, the means to do so are based on these core engine technologies and their supporting computer systems. Now, manufacturers have once again started producing supporting components to utilize the ECU’s ability to process data.

 

Categories
Hardware Operating System

Hard Drives: How Do They Work?

What’s a HDD?

A Hard Disk Drive (HDD for short) is a type of storage commonly used as the primary storage system in both laptop and desktop computers. It functions like any other type of digital storage device by writing bits of data and then recalling them later. It is worth mentioning that an HDD is what’s referred to as “non-volatile”, which simply means that it can retain data without a source of power. This feature, coupled with their large storage capacity and relatively low cost, is the reason HDDs are used so frequently in home computers. While HDDs have come a long way from when they were first invented, the basic way they operate has stayed the same.

How does a HDD physically store info?

Inside the casing there are a series of disk-like objects referred to as “platters”.

The CPU and motherboard use software to tell the “Read/Write Head” where to move over the platter; the head then applies a magnetic charge to a “sector” on the platter. Each sector is an isolated part of the disk containing thousands of subdivisions, each capable of accepting a magnetic charge. Newer HDDs have a sector size of 4096 bytes, or 32,768 bits; each bit’s magnetic charge translates to a binary 1 or 0 of data. Repeat this stage and eventually you have a string of bits which, when read back, can give the CPU instructions, whether it be updating your operating system or opening your saved document in Microsoft Word.
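
Those sector numbers are easy to verify, and they also give a feel for how many sectors a modern drive manages; the 1 TB figure below is just a convenient example size.

    # Sector arithmetic: bytes per sector, bits per sector, sectors per drive.
    sector_bytes = 4096
    print("bits per sector:", sector_bytes * 8)            # 32768

    drive_bytes = 1_000_000_000_000                        # a nominal 1 TB drive
    print("sectors on a 1 TB drive:", drive_bytes // sector_bytes)   # ~244 million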

As HDDs have been developed, one key factor that has changed is the orientation of the magnetic regions on the platter. Hard drives were first designed for “Longitudinal Recording” – meaning each bit’s magnetization lies flat, parallel to the surface of the platter – and have since moved to a different method called “Perpendicular Recording,” where the bits are stood on end. This change was made as hard drive manufacturers were hitting a limit on how small they could make each bit due to the “Superparamagnetic Effect.” Essentially, the superparamagnetic effect means that magnetic regions smaller than a certain size will flip their charge randomly based on temperature. This phenomenon would result in inaccurate data storage, especially given the heat that an operating hard drive emits.

One downside to Perpendicular Recording is increased sensitivity to magnetic fields and read error, creating a necessity for more accurate Read/Write arms.

How software affects how info is stored on disk:

Now that we’ve discussed the physical operation of a Hard Drive, we can look at the differences in how operating systems such as Windows, MacOS, or Linux utilize the drive. However, beforehand, it’s important we mention a common data storage issue that occurs to some degree in all of the operating systems mentioned above.

Disk Fragmentation

Disk fragmentation occurs after a period of data being stored and updated on a disk. For example, unless an update is stored directly after its base program, there’s a good chance that something else has been stored on the disk in between, so the update will have to be placed in a different sector farther away from the core program files. Due to the physical time it takes the read/write arm to move around, fragmentation can eventually slow down your system significantly, as the arm will need to reference more and more separate parts of your disk. Most operating systems come with a built-in program designed to “defragment” the disk, which simply rearranges the data so that all the files for one program are in one place. The process takes longer the more fragmented the disk has become. Now we can discuss different storage protocols and how they affect fragmentation.

Windows:

Windows grew out of MS-DOS (the Microsoft Disk Operating System) and uses a file management system called NTFS, or New Technology File System, which has been the standard for the company since 1993. When given a write instruction, an NT file system will place the information as close as possible to the beginning of the disk/platter. While this methodology is functional, it only leaves a small buffer zone between different files, eventually causing fragmentation to occur. Due to the small size of this buffer zone, Windows tends to be the most susceptible to fragmentation.

Mac OSX:

OS X and Linux are both Unix-based operating systems, but their file systems are different. Mac uses the HFS+ (Hierarchical File System Plus) format, which replaced the old HFS method. HFS+ differs in that it can handle a larger amount of data at a given time, being 32-bit rather than 16-bit. Mac OS X doesn’t need a dedicated defragmentation tool the way Windows does; OS X avoids the issue by not reusing space on the HDD that has recently been freed up – by deleting a file, for example – and instead searching the disk for larger free sectors to store new data. Doing so increases the space older files have near them for updates. HFS+ also has a built-in tool called HFC, or Hot File adaptive Clustering, which relocates frequently accessed data to special sectors on the disk called the “Hot Zone” in order to speed up performance. This process, however, can only take place if the drive is less than 90% full; otherwise issues in reallocation occur.  These processes coupled together make fragmentation a non-issue for Mac users.

Linux:

Linux is an open-source operating system, which means that there are many different versions of it, called distributions, for different applications. The most common distributions, such as Ubuntu, use the ext4 file system. Linux has the best solution to fragmentation, as it spreads files out all over the disk, giving them all plenty of room to grow without interfering with each other. In the event that a file needs more space, the operating system will automatically try to move the files around it to give it more room. Especially given the capacity of most modern hard drives, this methodology is not wasteful, and it results in essentially no fragmentation on Linux until the disk is above roughly 85% capacity.

What’s an SSD? How is it Different to a HDD?

In recent years, a new technology has become available on the consumer market which replaces HDDs and the problems they come with. Solid State Drives (SSDs) are another kind of non-volatile memory; they simply store a charge, or no charge, in a tiny cell. As a result, SSDs are much faster than HDDs, as there are no moving parts and therefore no time spent moving a read/write arm around. Additionally, having no moving parts increases reliability immensely. Solid state drives do have a few downsides, however. Unlike with hard drives, it is difficult to tell when a solid state drive is failing. Hard drives will slow down over time or, in extreme cases, make an audible clicking signifying the arm is hitting the platter (in which case your data is most likely gone), while solid state drives will simply fail without any noticeable warning. Therefore, we must rely on software such as “Samsung Magician”, which ships with Samsung’s solid state drives. The tool works by writing a piece of data to the drive, reading it back, and checking how fast it is able to do this. If the write speed falls below a certain threshold, the software will warn the user that their solid state drive is beginning to fail.
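
The principle is simple enough to sketch in a few lines of Python: time a test write, compute throughput, and compare it to an expected floor. This is only an illustration of the idea, not Samsung Magician’s actual method; the 100 MB test size and 50 MB/s threshold are arbitrary stand-ins.

    import os
    import tempfile
    import time

    TEST_BYTES = 100 * 1024 * 1024     # size of the throwaway test file
    MIN_MB_PER_S = 50                  # arbitrary "healthy drive" floor

    def write_speed_mb_per_s(path: str) -> float:
        chunk = os.urandom(1024 * 1024)                 # 1 MB of random data
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(TEST_BYTES // len(chunk)):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())                        # make sure it actually hit the drive
        return TEST_BYTES / (1024 * 1024) / (time.perf_counter() - start)

    with tempfile.NamedTemporaryFile() as tmp:
        speed = write_speed_mb_per_s(tmp.name)
        print(f"write speed: {speed:.1f} MB/s")
        if speed < MIN_MB_PER_S:
            print("warning: drive may be degrading")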

Do Solid States Fragment Too?

While data still piles on top of itself, and files for one program still end up in different places, it doesn’t matter much with solid state drives, as there is no delay caused by a read/write arm moving back and forth between different sectors. Fragmentation does not decrease performance the way it does with hard drives, but it does affect the life of the drive: solid state drives with scattered data can have a reduced lifespan. Because of the way solid state drives work, the extra write cycles caused by defragmenting decrease the overall lifespan of the drive, so defragmentation is avoided for the most part given its small benefit. That being said, a file system can still reach a point on a solid state drive where defragmentation is necessary. It would be logical for a hard drive to be defragmented automatically every day or week, while a solid state drive might require only a few defragmentations, if any, throughout its lifetime.

Categories
Operating System

The little smart watch that could: A pebble love story

If you’ve ever wondered what the geekiest gadget to own is, you may get a few different responses. Maybe it’s a drone, maybe it’s a ringtone that is an anime intro song, but for a lot of tech nerds it was the Pebble watch.

Why do gadget heads love it so much? Well, back in 2012 Pebble ran a Kickstarter campaign to fund the would-be watch company. It ended up being the most-funded Kickstarter ever, and geeks love a good Kickstarter story. It’s the nerd version of David vs. Goliath.

But we also loved the technology behind it. Pebble watches were always water resistant. The battery life was about a week. The display is an e-paper display, and tech-savvy people love discussing how much they love e-paper displays. Compared to the first-generation Apple Watch, Pebble had more battery life (7x more, actually), it had swimming support, and it did it all years before anyone else did.

(Chart takeaways: Pebble watches have by far the most battery life of the popular smartwatches compared here; Pebble’s starting price is half that of the next-best watch while offering more battery life and swimming support; and Pebble and the Apple Watch Series 2 are the only watches on the list with swimming support – and that’s the basic Pebble Time, while the Pebble 2 + Heart Rate has even more athletic support.)

Pebble was the under dog that never stopped impressing.

Its app store had 1,000 applications. That’s a ton for the little smartwatch that could. You could attach the timepiece to your bike and it would track your speed. The Pebble 2 with heart rate can track your sleep schedule and calories (full disclosure: I bought one of these yesterday and am currently waiting for it to come via snail mail). It vibrates when you get a text or email, and unlike the latest and greatest Fitbit Charge 2, you can respond to text messages from the watch! All while maintaining incredible battery life.

Back in 2016, Pebble was bought out by Fitbit, a worthy adversary. For a company that was primarily funded via Kickstarter, it was an entrepreneur’s dream. It also means that Pebble is selling off all of its inventory, so get yourself a Pebble watch before they go away forever. Then you too can have the geekiest gadget around.

Good bye Pebble. You were dearly loved.

Categories
Android Apps Software

Getting Familiar with Android Studio

Last time we covered the basics of Google’s official IDE for Android app development: Android Studio. You can find that article here. Now we will learn about how an Android app is structured and organized, what files interact with each other, and what they do.

Categories
Android Apps Software

Getting Started with Android Studio

Android is a great platform for a beginner developer to make his or her first smartphone app on. Android apps are written in Java, and the user interface layouts are generally written in XML. Android apps are developed in many well-known IDEs (integrated development environments – programs that typically package together a code editor, compiler, debugger, interpreter, build system, version control system, and deployment system, as well as other tools) such as Eclipse, IntelliJ IDEA, and Android Studio. In this article we will cover the basics of Android Studio.


Categories
Operating System

Apple Expands Reach on USB-C: What’s Next for Future Devices?

The USB Type C port on the new 2015 Macbook
The USB Type C port on the new 2015 Macbook

In 2015, we saw Apple relaunch what had been its flagship laptop line before the switch to the Pro/Air series. The new MacBook, which included many features that were relatively new to the laptop world, hit the shelves with its Intel Core M processor, butterfly keyboard, and beautiful Retina display. This product was praised for its innovation in many areas, but what took the technology world by storm wasn’t any of those features, the display, or even the debut in three different finishes. Yes, the big discussion was its ports, or lack thereof.

This MacBook featured a single port, a USB Type-C port, opposite the 3.5mm headphone jack on the other side of the computer. This notebook, the smallest and thinnest in the MacBook family, left every other port off of the case and pushed them into the world of adapters. While many were left scratching their heads, Apple was not only selling plenty of these devices but also being praised for product innovation and for debuting the next new type of USB on its devices.

Fast forward to the fall of 2016. Apple’s newest line of MacBook Pros is announced, featured, and released. Amidst the new keyboard previously seen on the lower-level MacBook, and even the Touch ID touch bar sitting atop the keyboard, again the question brought up is this: what about its ports?

The new MacBook Pro w/ USB-C ports

Now that Apple has brought the USB Type-C port to its higher end of laptops (all of the MacBook Pro line), what can we expect from future devices? Will Apple learn from the likes of Google and Motorola and integrate the newest port into its iPhone and iPad (and iPod?) lines?

What’s next in line for an overhaul among Apple’s core devices is, in fact, the MacBook Air. Praised as the perfect everyday computer, it isn’t built for heavy usage and professional applications, but it is perfect for the average user and student thanks to its battery longevity, ease of use, and efficiency in a small form factor. It currently features a MagSafe 2 charging port, two USB 3.0 ports, and a headphone jack.

We have already seen MagSafe 2, the hyped-up successor to the original MagSafe 1 charging port on the older MacBook Pros and Airs, phased out on two of the three MacBook lines. What is due up next is removing this port on the Air. This would pave the way for the Air to upgrade to the next level of innovation and include a Type-C port for charging, fitting right in with its brothers in the line-up. The new port is not just for charging; it is also lightning fast for data, so remove the other USB ports and you’ve got yourself a MacBook Air with multiple Type-C ports and a headphone jack, along with the improvements in display and keyboard that should come with it.

But what does this mean? Is the adapter life going to consume us for the rest of time? We don't know that answer yet, but it is worth thinking about. For Apple, and for the many companies likely to follow suit, there is a huge market in customers buying dongles and adapters to hang like winding branches off their laptops. For HDMI, Thunderbolt, Ethernet, and other ports still very much necessary today, will companies phase them out and stick to adapters forever, or will Apple learn from the adapter game and start integrating these ports back into its devices, using these models as a sort of "testing phase"?

For now, we'll see where this takes us with the product releases in the spring and fall of 2017, but one thing is clear: USB Type-C is here to stay on Apple devices, and there's no getting around it. Maybe we will see it dominate everything from the iPhone and iPad to MacBooks and even desktop computers.

Categories
Operating System

Amazon’s Echo and Alexa: A User’s Experience

Introduction:

Over the holiday break, a new piece of technology arrived in the Afonso household: we purchased an Amazon Echo Dot. At $50, the price seemed reasonable enough to take a shot at getting on the cutting edge of smart home technology. Unfortunately, due to a lack of smart devices in our home, we were unable to use Alexa's greatly touted integrations with products like Nest or Zigbee-based lighting. However, Alexa can be used for much more than controlling a smart home, so I'll speak to some Alexa Skills (the Echo's version of applications) that we tried and our experiences with them.

Built-In Functionality:

Out of the box, the Echo can easily be configured to work with a wide array of streaming media services. Pandora, Spotify (restricted to Premium accounts), iHeartRadio, and TuneIn Radio are all built in. This makes the Echo a perfect candidate for a smart radio: it has a small built-in speaker (a low-fidelity mono speaker, so an external speaker is recommended) as well as Bluetooth connectivity for larger and better audio equipment. News, weather, and sports briefings are also built in, and they need to be configured at setup time using the Amazon Alexa app (available for both iPhone and Android). There is also built-in smart device detection, which I was unable to experiment with, that finds smart devices and pairs with them so you can use simple keywords like "turn on" and "turn off"; additional smart home skills are needed for more in-depth control of other devices.

The Skills:

These can be turned on for use with your device by stating “Alexa, enable *insert skill name here*”.

To use any skill, state “Alexa, open *insert skill name here*”.

To use a skill and pass it information, state “Alexa, ask *insert skill name here* to *insert parameter name here*”

Anymote Smart Remote:

After configuring the skill using the Anymote App for iPhone (instructions are openly available), I was able to control my Roku Smart TV device using my voice. Simply stating “Alexa, open Anymote” followed by the remote input you’d like to perform such as “Volume up”, “Home Button”, “Up Button” will interact with the device you’ve configured it with. Overall, a very useful skill for those looking to remote control any of their network connected devices.

Jeopardy J6:

This is a shortened version of the classic game show that allows the user to answer clues from a recent airing of the show. Alexa responds with whether the question you provided for the corresponding answer was correct. The performance of this skill was superb, and it really allows for an interactive experience with the Echo.

Twenty Questions:

This is another classic game that lets you think of any object (limited to a set of categories); within 20 questions, Alexa will try to guess what you're thinking of. The interactivity of this skill is also superb, and the shock when Alexa gets those obscure guesses right is pretty amazing.

Ooma Telo:

This skill is perfectly utilitarian: it allows you to place a call via the Echo, with one caveat. The call must be completed using an existing phone line and cannot be carried over the Echo itself. Essentially, once you ask Ooma to place a call, it initiates a three-way call between its VoIP service and the phone you choose, so it is limited to initiating calls.

Drive Time:

This skill is the perfect companion for a commuter. Since Alexa has no built-in travel time estimates, Drive Time lets you ask for driving times to favorite locations that you configure yourself. There is no search function, and locations must be entered beforehand in the skill's settings in order to use them.

Experience Summary:

Alexa can do some really remarkable things, and because the Echo was released about two years ago (2014), the Skills that have been developed extend its functionality to a variety of platforms and devices. The Echo Dot does beg for a better speaker, but at the $50 price point that's expected, and it provides an incentive to buy the Dot's larger sibling, the standard Echo. The overall versatility of the connections on the device (3.5mm output, Bluetooth, Wi-Fi, and Zigbee via a Wi-Fi hub) makes it perfect for controlling audio and other devices.

Categories
Hardware Software

Wearable Technology

2016 has given us a lot of exciting new technologies to experiment with and be excited about. As time goes by, technology is becoming more and more integrated into our everyday lives, and it does not seem like that will stop anytime soon. Here are some highlights from the past year and some amazing things we can expect to get our hands on in the years to come.

Contact Lenses

That’s right, we’re adding electronic capabilities to the little circles in your eyes. We’ve seen Google Glass, but this goes to a whole other level. Developers are already working on making lenses that can measure your blood sugar, improve your vision and even display images directly on your eye! Imagine watching a movie that only you can see, because it’s inside your face!

Kokoon

Kokoon started out as a Kickstarter that raised over 2 million dollars to fund its sleep-sensing headphones. It is the first of its kind, able to help you fall asleep and to monitor when you have fallen asleep so it can adjust your audio in real time. It's the insomniac's dream! You can find more information on the Kokoon here: http://kokoon.io/

Nuzzle

Nuzzle is a pet collar with built-in GPS tracking to keep your pet safe in case it gets lost. But it does more than that: using the collar's companion app, you can monitor your dog's activity and view wellness statistics. Check it out: http://hellonuzzle.com/

Hearables

Your ears are the perfect place to measure all sorts of important stuff about your body such as your temperature and heart rate. Many companies are working on earbuds that can sit in your ear and keep statistics on these things in real time. This type of technology could save lives, as it could possibly alert you about a heart attack before your heart even knows it.

Tattoos

Thought it couldn't get crazier than electronic contacts? Think again. Companies like Chaotic Moon and New Deal Design are working on temporary tattoos that can use the electric currents on the surface of your skin to power themselves up and do all kinds of weird things, including opening doors. Whether or not these will be as painful as normal tattoos is still a mystery, but we hope not!

VR

Virtual reality headsets have been around for a while now, but they represent the ultimate form of wearable technology. These headsets are not mainstream yet and are definitely not perfected, but we can expect to get broader access to them within the next couple of years.

Other impressive types of wearable tech have been greatly improved on this year such as smart watches and athletic clothing. We’re even seeing research done on Smart Houses, which can be controlled completely with your Smart Phone, and holographic image displays that don’t require a screen. The future of wearable technology is more exciting than ever, so get your hands on whatever you can and dress to impress!

Categories
Mac OSX Operating System

Gaming on a MacBook Pro

Despite what the average internet person will tell you, MacBooks are good at what they do. That’s something important to remember in a time where fanboying is such a prevalent issue in the tech consumer base. People seem eager to take sides; binary criticism removing the reality that machines can have both good and bad qualities. MacBooks are good at what they do, and they also have their disadvantages.

One of the things MacBooks aren’t good at (mostly due to their architecture) is playing games. If you’re looking for high-performance gameplay, Windows machines are objectively better for gaming. Despite this, there are plenty of games and workarounds that’ll still enable you to have fun with friends or in your dorm room after a long stressful day even on a MacBook.

Note: I’ll only be listing the methods and games I’ve personally found to work well. There are likely tons of games and methods that work great, but I haven’t tried yet.  While I’m aware you can always install Windows via Boot Camp, I’ll only be touching on methods and games that don’t require altering the OS or running a virtual machine. Below is a screenshot of my machine’s specs for reference.

[Screenshot: my machine's specifications, for reference]

Actually Getting Games  

Do you like games? Do you like sales? Do you often fantasize about purchasing AAA games for prices ranging from Big Mac to Five Guys? Steam is the way to go. You can get Steam here, and I highly recommend you do. Steam is great because of its frequent sales, interface, and ability to carry over your purchases between machines easily. A good amount of Steam titles are supported on Mac OS, so if you’ve been previously using a Windows machine and have a huge library, you won’t have to repurchase all of your games if you switch to a new OS. You can also purchase some games off of the App Store, though the selection there is far smaller in comparison.

Configuration 

If you’re planning on playing an FPS on your MacBook, you’re likely going to want a mouse. A mouse is far more accurate and comfortable than a trackpad when it comes to interacting with most game interfaces. However, after plugging in your mouse you might find that it feels…weird. It accelerates and slows itself down sporadically and probably feels like it’s fighting you. No need to worry! This is a simple fix.


First, launch Terminal and enter the following command:

defaults write .GlobalPreferences com.apple.mouse.scaling -1    

This will disable Mac OS’s built in scaling and allow you and your mouse to have healthy bonding time without it suddenly deciding to perform an interpretive dance in the style of the plastic bag from American Beauty.


Another bonus piece of advice would be to go to System Preferences > Keyboard > and check the option to use the function keys without having to press the fn key. If you’re playing games that require usage of the function keys, you’ll find it easier to only have to hit one key vs having to take your hand off the mouse to hit two.

 

Finally, I recommend you keep your system plugged in and on a desk. Just like with most laptops, demanding processes like games can drain the battery faster than Usain Bolt can run across campus and make your laptop hotter than that fire mixtape you made in high school.

Solo game recommendations

So, you've set up your mouse and keyboard, installed Steam, and you've got some free time to play some games. What now? Well, not every game that is listed as "compatible" with Mac OS actually works well with Mac OS. Some games lag and crash, while others might run at a high frame rate with no problems. Here are a few games I've found work well with my system. (Reminder: performance may vary.)

1. h a c k m u d


"h a c k m u d" is a game set in a cyberpunk future where you're a master hacker. This isn't Watch_Dogs though. You're not "hacking" by pressing a single button; rather, every single bit of code is typed by you. If you don't know how to code, the game does an alright job of teaching you the basics of its own language (which feels like a simplified take on JavaScript). The first hour of the game is spent locked in a server where you'll have to solve some interesting logic puzzles. Once you escape the server, the game suddenly becomes a fully functional hacking MMO entirely populated by actual players. The game runs well on Mac OS, as it's almost entirely text-based.

2. Pillars of Eternity


Do you like classic CRPGs? If the answer is yes, you’ll probably love Pillars. It’s a CRPG that fixes a lot of the problems the genre faced during its golden age, while not losing any of its complexity and depth. The game runs well, though do expect a loud and hot system after just a few minutes.

3. SUPERHOT


Do you often dream of being a bad-ass ninja in the matrix? SUPERHOT is a game where the central gimmick is that time only moves when you move. More accurately, time moves at a fraction of a second when you aren’t moving your character. This allows for moments where you can dodge bullets like Neo and cut them in half mid-flight with a katana. The game runs great, though your system will quickly get super hot (pun intended).

4. Enter the Gungeon


Enter the Gungeon is a cute little rogue-like bullet hell where your goal is to reach the end of a giant procedurally generated labyrinth while surviving an endless onslaught of adorable little sentient bullets that want to murder you. The game is addictive and runs well, though one common issue I found was that the game will crash on startup unless you disable the steam overlay. It’s a shame though that you can’t enjoy the co-op feature…

…or can you?

MacBook Party 

Who wants to play alone all the time? This is college, and like a Neil Breen movie, it’s best enjoyed with friends by your side. Here’s a tutorial on how to set up your MacBook for some local gaming fun-time.

First things first, you’re going to want some friends. If you don’t have any friends installed into your life already, I find running “heystrangerwannaplaysomegameswithme.exe” usually helps.

Next, you’re going to want to get one of these. This is an adapter for Xbox 360 controllers, which you should also get a few of here. Plug in the USB adapter into your MacBook. Now, Mac OS and the adapter will stubbornly refuse to work with each other (symbolic of the fanboying thing I mentioned at the beginning of this post), so you’re going to have to teach them the value of teamwork by installing this driver software.

Once you’re all set, you should be able to wirelessly connect the controllers to the adapter and play some video games. One optional adjustment to this process would be to connect your MacBook via HDMI to a larger display so everyone can see the screen without having to huddle around your laptop.

Enter the Gungeon has a great two-player co-op mode. I’d also recommend Nidhogg and Skullgirls for some casual competitive matches between friends.

And there you have it! Despite what some very vocal individuals on the internet might tell you, it is possible to enjoy some light gaming on a Macbook. This is the part where I’d normally make some grand statement about how the haters were wrong when they said it couldn’t be done; but alas, that would merely be fueling a war I believe to be pointless in the grand scheme of things. Are we not all gamers? Are we not all stressed with mountains of work and assignments? Are we not all procrastinating when we should be working on said assignments? While our systems may be different, our goals are very much the same. And with that, I hope you find my advice helpful on your quest for good video games.

Best,

Parker

Categories
Operating System

Basic Wi-Fi Troubleshooting on macOS

From time to time, you may find yourself in a situation where Wi-Fi isn't working on your computer while on campus. This is a quick and basic guide to help you get back online.

Disconnect & Reconnect

This is the easiest method to execute. While holding the Option key, click on the Wi-Fi icon in the menu bar. You'll see something like this:

[Screenshot: the expanded Wi-Fi menu shown when Option-clicking the icon]

Click on “Disconnect from eduroam”, and the Wi-Fi icon will dim immediately. Seconds later it will reconnect, provided you are on campus where your computer is picking up eduroam. This will solve the majority of issues that are related to connectivity.

Deleting the Eduroam Profile

This is a multi-step but simple process. Begin by opening System Preferences and clicking on the Profiles button.


In the Profiles menu, select the Eduroam profile, and hit the delete key on your keyboard.


The system will ask if you are sure you want to remove the profile. Confirm the removal.

Once the profile is removed, consult this article to set up Eduroam on your laptop. This method will solve a vast majority of authentication related issues, particularly after a password reset.

Rearranging the Order of Preferred Networks

There will be times when your computer, for one reason or another, is configured to connect to the UMASS network over the eduroam network. Whereas eduroam is secured and does not require a login each time you connect, the UMASS network is not secure and will prompt for login information, preventing normal network access.

To change this, first open System Preferences, and then click on Network.


Once in the Network menu, hit Advanced.


In the Advanced menu, under the Wi-Fi submenu, make sure that UMASS is listed underneath eduroam. This tells the computer to attempt to connect to eduroam before attempting to connect to UMASS.

Hit OK and close the menu. The computer may ask whether you want to apply the settings; hit Apply.

Gathering information for UMass IT

If none of the methods above worked, and our consultants are not able to resolve your issue over email, we may ask you for certain technical information, such as the BSSID, IP address, and MAC address. Most of this information can be easily retrieved by clicking on the Wi-Fi icon while holding the Option key.
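If you're comfortable with Terminal, some of the same details can be pulled with built-in macOS tools. The small Python sketch below is only an illustration: it assumes en0 is your Wi-Fi interface (true on most modern MacBooks, but yours may differ) and simply wraps the standard ipconfig and ifconfig commands to print the current IP address and the MAC address.

import subprocess

INTERFACE = "en0"  # assumed Wi-Fi interface; yours may differ

# Current IPv4 address assigned to the Wi-Fi interface
ip = subprocess.run(["ipconfig", "getifaddr", INTERFACE],
                    capture_output=True, text=True).stdout.strip()
print("IP address:", ip or "not connected")

# Full interface details; the "ether" line contains the MAC address
details = subprocess.run(["ifconfig", INTERFACE],
                         capture_output=True, text=True).stdout
for line in details.splitlines():
    if "ether" in line:
        print("MAC address:", line.split()[1])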


Hope this information was helpful!

Categories
Operating System

PRAM and SMC Resets

Among the quick fixes for many issues on a Mac are PRAM and SMC resets.

PRAM stands for parameter random access memory, which can contain settings such as speaker volume, screen resolution, startup disk selection, and recent kernel panic information. Performing a PRAM reset can fix a number of issues, such as Wi-Fi connectivity problems, drives not showing up, or screens not adjusting properly. To do a PRAM reset, all that has to be done is turning the Mac on and holding Command + Option + P + R until the machine chimes a second time. On a Late 2016 MacBook Pro, which has no startup chime, the keys should simply be held for a longer amount of time. However, PRAM resets are actually a thing of the past: the majority of Macs in use today (ones manufactured after 2008) primarily use NVRAM to store many of these settings. NVRAM stands for non-volatile random access memory, and an NVRAM reset is performed the same way as a PRAM reset; both restore this small store of settings to factory defaults, fixing a number of potential issues.

SMC stands for system management controller, which is present only on Intel-based Macs. An SMC reset resets this controller, the part of the machine that deals with hardware and power management, and it can fix problems with fans, lights, power, and system performance. There are a variety of ways to reset the SMC depending on the kind of Mac you're working with. A desktop Mac, such as a Mac Pro, Mac mini, or iMac, requires disconnecting the power cord from the machine, waiting 15 seconds, plugging it back in, waiting another 15 seconds, and then turning the Mac back on. With a Mac laptop that has a non-removable battery, shut the Mac down and connect it to its power adapter. Hold Shift, Option, and Control on the left side of the keyboard, press the power button, release all of the keys, and then turn the Mac on normally. For Mac laptops manufactured in 2008 or earlier with removable batteries, turn the machine off, disconnect the power cable, and remove the battery. Press the power button and hold it for 5 seconds. Then put the battery back in, reconnect the power cable, and turn the Mac back on.

Categories
Operating System

The Advancement of Prosthetics

Whether it's veterans, amputees, or those born with certain abnormalities, prosthetics have allowed millions to live ordinary lives and do things they never thought possible. The idea of prosthetics is not a new one, though. Ever since Greek and Roman times in the B.C. era, doctors were attaching wooden stumps to those missing legs, arms, toes, and more. The technology behind prosthetics, however, has only really picked up in the 20th and 21st centuries.

But how does it work? When I think of moving my arms or my legs, I am physically able to do so. But what if I had a prosthetic? How in the world do I make this limb-shaped computer actually do stuff? Well the answer is quite impressive actually. This is definitely one of those medical practices that just makes me go “wait, we can do that?”

If you are a healthy individual, you are able to move your limbs thanks to electrical signals that your brain sends through nerves to your muscles. Your muscles receive these electrical signals and either contract or relax. But if I were to amputate, let's say, everything below your right knee, where would that electrical signal go? The signal would still travel along the nerve toward your lower leg, but it would hit a dead end and produce no response. In order to make a newly attached prosthetic usable, some rewiring needs to be done inside the body.

Doctors are able to perform what is called targeted muscle reinnervation. In this process, doctors redirect those electrical signals to another muscle in the body, the chest for example. The nerves that once controlled your lower leg now contract your chest muscles instead. You're probably thinking: how does contracting my chest help the fact that I'm missing part of my leg? It's valuable because the electrical activity of these chest muscles can be sensed with electrodes and used to provide control signals to a prosthetic limb. The end result is that just by thinking of moving your amputated leg, you cause the prosthetic leg to move instead.
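The control signal itself boils down to "when the reinnervated muscle is active enough, move the prosthetic." The toy Python sketch below illustrates only that concept with invented numbers: it thresholds a pre-recorded muscle-activity (EMG) envelope and emits a move command whenever activity crosses the threshold. Real prosthetic controllers are far more sophisticated than this.

# Invented EMG envelope samples (arbitrary units), e.g. from chest electrodes
emg_envelope = [0.05, 0.07, 0.06, 0.42, 0.55, 0.48, 0.09, 0.06, 0.51, 0.08]

THRESHOLD = 0.30  # activity level above which we treat the muscle as "flexed"

for i, activity in enumerate(emg_envelope):
    if activity > THRESHOLD:
        print(f"sample {i}: muscle active ({activity:.2f}) -> drive prosthetic motor")
    else:
        print(f"sample {i}: at rest ({activity:.2f}) -> hold position")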

Even outside of the biological aspect of prosthetics, they are truly feats of engineering. Since no two human bodies are physically exactly the same, all prosthetics need to be specifically designed to each patient. A wide variety of materials are used to create the actual limb, including acrylic resin, carbon fiber, thermoplastics, silicone, aluminum, and titanium. To create a life-like appearance, a foam cover can be applied and shaped to match the real limb. A flexible skin-like covering will be applied over the foam to give it the life-like appearance.

Prosthetics have given millions the opportunity to live a normal life, and the technologies behind them are only getting better. Newer technologies allow people to move their prosthetic limbs without any invasive surgery or neural rewiring. The future is here; let's just make sure we don't all turn into robots.

Categories
Operating System

Forget About It!: How to forget a network in Windows 10

Sometimes, it’s better to just forget!

One of the most common tropes in the tech support world is the tried and true “have you tried turning it off and turning it back on again?”. Today, we’ll be examining how we can apply this thinking to helping solve common internet connectivity issues.

While it's one of the best things to do before trying other troubleshooting steps, "forgetting" your wireless network is not a step most people think of right away. Forgetting a network removes its configuration settings from your computer and causes it to no longer try to connect to it automatically. This is one way to fix configuration settings that just didn't get it right the first time.

Today, we’ll be examining how to “forget” a network on Windows 10 in four quick, easy steps!

  1.  Navigate to the settings page and select “Network & Internet” settings
  2. Select "Wi-Fi" from the left menu, then select "Manage known networks".
  3. Find your network, click on it, then select the "Forget" button.
  4. Open up your available networks, and try to reconnect to the network you would usually connect to.


And that’s it!
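If you'd rather use the command line, the same "forget" can be done with Windows' built-in netsh tool. The sketch below is just an example: the profile name "eduroam" is a placeholder for whatever network you want to remove, and it should be run from a prompt with sufficient privileges.

import subprocess

# List the wireless profiles Windows has saved
subprocess.run(["netsh", "wlan", "show", "profiles"])

# Delete ("forget") a specific profile -- replace the name with your network
network_name = "eduroam"
subprocess.run(["netsh", "wlan", "delete", "profile", f"name={network_name}"])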

While this may not solve every connectivity issue, it is a good place to start. Hopefully this quick tutorial helps you troubleshoot any wireless problems you may have. If issues persist, you should next look into potential service outages, your network card, or, in the case of home networks, your modem/router.

 

Categories
Hardware

A Fundamental Problem I See with the Nintendo Switch

Nintendo’s shiny new console will launch on March 3rd…or wait, no…Nintendo’s shiny new handheld will launch on March 3rd…Wait…hold on a second…what exactly do you call it?

The Nintendo Switch is something new and fresh that is really just an iteration on something we’ve already seen before.

In 2012, The Wii U, widely regarded as a commercial flop, operated on the concept that you could play video games at home with two screens rather than one. The controller was a glorified tablet that you couldn’t use as a portable system. At most, if your grandparents wanted to use the television to watch Deal or No Deal, you could take the tablet into the other room and stream the gameplay to its display.

Two months later, Nvidia took this concept further with the Nvidia Shield Portable. The system was essentially a bulky Xbox 360 controller with a screen you could stream your games to from your gaming PC. The system also allowed you to download light games from the Google Play store, so while it wasn’t meant to be treated as a handheld, it could be used as one if you really wanted to.

Then, a full year after the release of the Wii U, Sony came out with the PlayStation 4. Now, if you owned a PlayStation Vita from 2011, you could stream your games from your console to your Vita. Not only would this work locally, but you could also do it over Wi-Fi. So, what you had was a handheld that could also play your PS4 library from anywhere that had a strong internet connection. This became an ultimately unused feature as Sony gave up trying to compete with the 3DS. As of right now, Sony is trying to implement this ability to stream over Wi-Fi to other devices, such as phones and tablets.


And now we have the Nintendo Switch. Rather than make a system that can stream to a handheld, Nintendo decided to just create a system that can be both. Being both a handheld and a console might seem like a new direction when in reality I’d like to think it’s more akin to moving in two directions at once. The Wii U was a dedicated console with an optional function to allow family to take the TV from you, the Nvidia Shield Portable was an accessory that allowed you to play your PC around the house, and the PlayStation Vita was a handheld that had the ability to connect to a console to let you play games anywhere you want. None of these devices were both a console and a handheld at once, and by trying to be both, I think Nintendo might be setting themselves up for problems down the road.


Remember the Wii? In 2006, the Wii was the hot new item that every family needed to have. I still remember playing Wii bowling with my sisters and parents every day for a solid month after we got it for Christmas. It was a family entertainment system, and while you could buy single-player games for it, the only time I ever see the Wii getting used anymore is with the latest Just Dance at my aunt's house during family get-togethers. Nobody really played single-player games on it, and while that might have a lot to do with the lack of stellar "hardcore" titles, I think it has more to do with Nintendo's mindset at the time. Nintendo is a family-friendly company, and gearing its system toward inclusive party games makes sense.


Nintendo also has their line of 3DS portable systems. The 3DS isn’t a family system; everyone is meant to have their own individual devices. It’s very personal in this sense; rather than having everyone gather around a single 3DS to play party games on, everyone brings their own. Are you starting to see what I’m getting at here?

 

Nintendo is trying to appeal to both the whole family and create a portable experience for a single member of the family. I remember unboxing the Wii for Christmas with my sisters. The Wii wasn’t a gift from my parents to me; it was a gift for the whole family. I also remember getting my 3DS for Christmas, and that gift had my name on it and my name alone. Now, imagine playing Monster Hunter on your 3DS when suddenly your sisters ask you to hand it over so they can play Just Dance. Imagine having a long, loud fight with your brother over who gets to bring the 3DS to school today because you both have friends you want to play with at lunch. Just substitute 3DS with Nintendo Switch, and you’ll understand why I think the Switch has some trouble on the horizon.

You might argue that if you're a college student who doesn't have your family around to steal the Switch away, this shouldn't be a problem. While that might be true, remember that Nintendo's target demographic is, and has always been, the family. Unless they suddenly decide to target the hardcore demographic, which it doesn't look like they're planning on doing, Nintendo's shiny new console/handheld will probably tear the family apart more than it brings them together. When you're moving in two directions at once, you're bound to split in half.

 

Categories
Android Apps iOS

Fitbit, Machine Learning, and Sleep Optimization


Photo: Fitbit Blog

My big present for Christmas this year was a Fitbit Charge 2. I'd wanted one for a while, but not for anything fitness-related. While I do like to keep track of my active lifestyle choices, I didn't want one with fitness in mind at all. My Fitbit's key feature, and the reason I ditched my reliable $10 Casio watch for it, is its heart rate monitor. The monitor on my Charge 2 takes the form of two green, rapidly flashing LED lights. Visually and technically, it's similar to the light you may be familiar with seeing underneath an optical mouse. Instead of tracking motion, though, this light's reflection keeps track of the subtle changes in my skin's color as blood pumps in and drains from my capillaries. The device sends the timing of those color changes to my phone, which runs the information through a proprietary algorithm to determine my heart rate. Other algorithms take into account my average heart rate and my lowest heart rate to calculate my resting heart rate (55).
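Fitbit's exact algorithm is proprietary, but the basic arithmetic behind an optical heart rate reading is easy to illustrate. The sketch below is a toy example, not Fitbit's method, and the timestamps are made up: it takes the times at which color-change peaks were detected and converts the average interval between beats into beats per minute.

# Toy illustration: convert detected pulse-peak timestamps (in seconds)
# into a heart rate estimate. Real wearables filter noise and motion
# artifacts first; these numbers are invented for the example.
peak_times = [0.00, 0.92, 1.85, 2.78, 3.70, 4.62]

# Interval between consecutive beats, in seconds
intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]

avg_interval = sum(intervals) / len(intervals)
bpm = 60 / avg_interval

print(f"Estimated heart rate: {bpm:.0f} BPM")  # roughly 65 BPM here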

But in the end, these are all just numbers. Some people (like me) just like having this data, but what can you actually do with it? Well, the Fitbit has another interesting feature. It uses your heart rate and motion information to determine when you’ve fallen asleep, when you’ve woken up, and whether you’re sleeping deeply or restlessly. I can check my phone every morning for a graphical representation of my sleep from the previous night, and determine how well I slept, how long I slept, and how my sleep fits in with my desired regular schedule (11:45 to 7:45). Kind of cool, right?

With a new market emphasis on machine learning, and sleep researchers making strides in answering fundamental questions, things are about to get a lot cooler.

Everybody has experienced miraculous three-hour slumbers that leave them feeling like they slept a full night, and heartbreaking ten-hour naps that make them question whether they slept at all. Although most of us consider those simple anomalies, scientists have caught on and are actively studying this phenomenon. From what I've gleaned online, sleep researchers find that allowing a sleeping subject to complete full sleep cycles (lasting about 90 minutes, with variation) results in fuller and more restorative sleep. In other words, 7 hours and 30 minutes can result in a better sleep than a full 8 hours. It sounds like quackery, but the evidence is widely available, peer-reviewed, and convincing to the layperson.
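To make the 90-minute idea concrete, here is a small sketch of my own (not anything from Fitbit or the research itself) that, given a bedtime, lists wake-up times landing at the end of a full cycle, assuming roughly 15 minutes to fall asleep.

from datetime import datetime, timedelta

CYCLE = timedelta(minutes=90)          # approximate length of one sleep cycle
FALL_ASLEEP = timedelta(minutes=15)    # assumed time to actually fall asleep

def cycle_friendly_wake_times(bedtime: datetime, max_cycles: int = 6):
    """Return wake times that fall at the end of complete sleep cycles."""
    sleep_start = bedtime + FALL_ASLEEP
    return [sleep_start + CYCLE * n for n in range(4, max_cycles + 1)]

bedtime = datetime(2017, 2, 20, 23, 45)   # 11:45 PM, as in my schedule
for wake in cycle_friendly_wake_times(bedtime):
    print(wake.strftime("%I:%M %p"))      # prints 06:00 AM, 07:30 AM, 09:00 AM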

Machine learning has been a buzzword for at least the past year. The concept itself is worthy of an entire post, but to summarize it for my purposes, it’s a broad term that refers to programming algorithms that adjust their behavior based on data input. For example, programs that predict what a customer wants to buy will show ads to that customer on a variety of platforms and decide where to show those ads more often, based on how much time the customer spends on each platform. Machine learning is essentially automating programs to use big data to improve their predictive or deductive capabilities.
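As a bare-bones illustration of that ad example (the numbers are entirely made up, and this is not any real ad platform's logic), a program might shift its ad budget toward the platforms where the customer spends the most time:

# Hypothetical data: minutes a customer spent on each platform this week
time_spent = {"search": 120, "social": 300, "video": 80}

# "Learning" here is just re-weighting future ad impressions
# in proportion to the observed behavior.
total = sum(time_spent.values())
ad_share = {platform: minutes / total for platform, minutes in time_spent.items()}

for platform, share in sorted(ad_share.items(), key=lambda kv: -kv[1]):
    print(f"{platform}: show {share:.0%} of ads here")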

Let’s bring this all together for a look into the future: If my Fitbit can keep track of my heartbeat to a precise enough degree to determine when I am in REM sleep — or can use an intelligent, learning-capable algorithm to set alarms that give me an optimal amount of sleep — I can have a personalized, automatic alarm that adapts to my habits and improves my quality of rest. Would that convince you to buy one?

Categories
Operating System

What is Data Forensics?

Short History of Data Forensics

The concept of data forensics emerged in the 1970s, with the first widely acknowledged data crime occurring in Florida in 1978, where deleting files to hide evidence became considered illegal. The field gained traction through the late 20th century, with the FBI creating the Computer Analysis and Response Team, quickly followed by the creation of the British Fraud Squad. The small initial size of these organizations created a unique situation in which civilians were brought in to assist with investigations. In fact, it's fair to say that computer hobbyists in the 1980s and 1990s gave the profession its momentum, as they assisted government agencies in developing software tools for investigating data-related crime. The first conference on digital evidence took place in 1993 at the FBI Academy in Virginia. It was a huge success, with over 25 countries attending, and it concluded with the agreement that digital evidence was legitimate and that laws regarding investigative procedure should be drafted. Until this point, no federal laws had been put in place regarding data forensics, which somewhat detracted from its legitimacy. The last chapter of this history takes place in the 2000s, which marked the field's explosion in size. The advances seen in home computing during this time allowed the internet to start playing a larger part in illegal behavior, and brought more powerful software both to aid and to counteract illegal activity. At this point, government agencies were still aided greatly by grassroots computer hobbyists, who continued to help design software for the field.

Why is it so Important?

The first personal computers, while incredible for their time, were not capable of many operations, especially when compared to today's machines. These limitations were bittersweet, as they also limited the illegal behavior that was possible. With hardware and software developing at a literally exponential rate, coupled with the invention of the internet, it wasn't long before crimes increased in parallel severity. For example, prior to the internet, someone could be caught in possession of child pornography (a fairly common crime associated with data forensics) and that would be the end of it; they would be prosecuted and their data confiscated. Post-internet, someone could be in possession of the same materials but now also be guilty of distribution across the web, greatly increasing the severity of the crime as well as the number of others who might be involved. 9/11 sparked a realization of the need for further development in data investigation. Though no computer hacking or software manipulation aided in the physical act of terror, it was discovered later that there were traces of data leading around the globe that pieced together a plan for the attack. Had forensic investigations been more advanced at the time, the plan might have been discovered and the entire disaster avoided. A more common use for data forensics is to discover fraud in companies and contradictions in their server systems' files. Such investigations tend to take a year or longer to complete, given the sheer amount of data that has to be looked through. Bernie Madoff, for example, used computer algorithms to change the apparent origin of the money being deposited into his investors' accounts so that his own accounts did not drop at all. In that case, more than 36 billion dollars were stolen from clients, a magnitude that is not uncommon for fraud of this degree. Additionally, if a company declares bankruptcy, it can be required to submit data for analysis to make sure no one is benefiting from the company's collapse.

How Does Data Forensics Work?

The base procedure for collecting evidence is not complicated. Judd Robbins, a renowned computer forensics expert, describes the sequence of events as follows:

The computer is first collected, and all visible data (meaning data that does not require any algorithms or special software to recover) is copied exactly to another file system or computer. It's important that the actual forensic analysis not take place on the accused's computer, in order to ensure there is no contamination of the original data.

Hidden data is then searched for, including deleted files or files that have been purposefully hidden from plain view and sometimes requiring extensive effort to recover.

Beyond simply deleting files or making them invisible to the system, data can also be hidden in places on the hard drive where it would not logically be. A file could be disguised as a registry file in the operating system to avoid suspicion. Sorting through the unorthodox parts of the hard drive in this way can be incredibly time-consuming.

While all of this is happening, a detailed report must be kept up to date, tracking not only the contents of the files but also whether any of them were encrypted or disguised. In the world of data forensics, merely hiding certain files can help establish probable cause.

Tools

Knowing the workflow of investigations is useful for a basic understanding, but the tools that have been created to assist investigators are the core of discovering data, leaving the investigators to interpret the results. While the details of these tools are often kept under wraps to prevent anti-forensics tools from being developed, their basic workings are public knowledge.

Data recovery tools are algorithms that examine the sectors of a disk to essentially reconstruct what might have been there before; this is how consumer data recovery works as well. Reconstruction tools do not have a 100% success rate, as some data may simply be too spread out to recover. Deleted data can be compared to an unsolved puzzle with multiple solutions, or perhaps a half-burnt piece of paper. It's also possible to recover only part of the data, and so chance comes into play again as to whether that data will be useful or not.

We've previously mentioned the process of copying the disk in order to protect the original. A software or hardware write blocker is in charge of copying the disk while ensuring that none of the metadata is altered in the process. The point of this tooling is to be untraceable, so that an investigator does not leave a signature on the disk. You could think of accidentally updating the metadata as putting your digital fingerprints on the crime scene.
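The verification step is simple to illustrate. Here is a minimal sketch, assuming the evidence has already been imaged to an ordinary file; real investigations use dedicated write-blocking hardware and imaging tools, which this is not. The file names are hypothetical. The idea is just to hash the original image and the working copy and proceed only if the digests match.

import hashlib
import shutil

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash a file in chunks so large disk images don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names for the example
original = "evidence.img"
working_copy = "evidence_copy.img"

shutil.copyfile(original, working_copy)  # analysts work on the copy, never the original

if sha256_of(original) == sha256_of(working_copy):
    print("Copy verified: digests match, safe to analyze the working copy.")
else:
    print("Digest mismatch: the copy is not a faithful image, do not proceed.")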

Hashing tools are used to compare one disk to another. If an investigator were to compare two different servers holding thousands of gigabytes of data by hand, it would take years to look for something that may not even exist. Hashing is an algorithmic technique that runs through one disk piece by piece and tries to identify a similar or identical file on another. The nature of hashing makes it excellent for fraud investigations, as it allows the analyst to check for anomalies that would indicate tampering.
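In code, the core of a hashing comparison is just "fingerprint every file on one side, then look those fingerprints up on the other." A rough sketch follows; the directory names are placeholders, it reads each file fully into memory for simplicity, and real tools add reporting, known-file databases, and much more.

import hashlib
from pathlib import Path

def sha256_of(path):
    # Reads the whole file at once; fine for a sketch, not for huge files
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def hash_index(root):
    """Map each file's SHA-256 digest to its location under root."""
    return {sha256_of(p): p for p in Path(root).rglob("*") if p.is_file()}

# Placeholder paths standing in for two mounted disk images
disk_a = hash_index("/mnt/disk_a")
disk_b = hash_index("/mnt/disk_b")

# Files whose contents are byte-for-byte identical on both disks
for digest in disk_a.keys() & disk_b.keys():
    print(f"match: {disk_a[digest]} <-> {disk_b[digest]}")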

Though many other tools exist, and many are developed as open source for operating systems such as Linux, these are the fundamental types of tools used. As computers continue to advance, more tools will inherently be invented to keep up with them.

Difficulties During Investigations

The outline of the process makes the job seem somewhat simple, if a little tedious. What excites experts in the field is the challenge of defeating the countermeasures a culprit may have put in place. These countermeasures are referred to as "anti-forensics" tools, and they can range as far in complexity as the creator's knowledge of software and computer operations. For example, every time a file is opened its metadata is changed (metadata refers to information about the file rather than its contents, such as the last time it was opened, the date it was created, and its size), which can be an investigator's friend or foe. Forensic experts are incredibly cautious not to contaminate metadata while searching through files, as doing so can compromise the integrity of the investigation; it could be crucial to know the last time a program was used or a file opened. Culprits with sufficient experience can edit metadata to throw off investigators. Additionally, files can be masked as different kinds of files to confuse investigators. For example, a text file containing a list of illegal transactions could be saved as a .jpeg file and its metadata edited so that the investigator would either pass over it, thinking a picture irrelevant, or open the picture to find nothing more than a blank page or even an actual image of something. They would only find the real contents of the file if they thought to open it with a word processor, as it was originally intended.
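Recording metadata before touching anything is one of the first things an examiner does. Here is a small sketch of that idea only; the mount point is a placeholder, and real tools log far more fields, to tamper-evident storage, without touching the original media at all.

import os
import csv
from datetime import datetime, timezone

def snapshot_metadata(root, out_csv="metadata_log.csv"):
    """Write size, modified time, and last-access time for every file under root."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "size_bytes", "modified_utc", "accessed_utc"])
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                writer.writerow([
                    path,
                    st.st_size,
                    datetime.fromtimestamp(st.st_mtime, timezone.utc).isoformat(),
                    datetime.fromtimestamp(st.st_atime, timezone.utc).isoformat(),
                ])

snapshot_metadata("/mnt/disk_a")  # placeholder mount point for the copied disk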

Another reason data is carefully copied off the original host is to avoid any risk of triggering a programmed "tripwire," so to speak. Trying to open a specific file could activate a program that scrambles the hard drive to prevent any other evidence from being found. While deleted data can often be recovered, scrambling cannot be undone: it rewrites random bits across the entire drive, and overwritten data is impossible to restore, which can protect incriminating evidence. That being said, if such a process occurs, it offers compelling reason to continue the investigation, since someone has gone to such an extent to keep data out of the hands of the police.

Additionally, remote access via the internet can be used to alter data on a local computer. For this reason, it is common practice for those investigating to sever any external connections the computer may have.

Further, data forensics experts are forced to be meticulous, as small errors can result in corrupted data that can no longer be used as evidence. Beyond fighting the defendant's attempts to hide their data, analysts also contend with the law to keep their evidence relevant and legal. Accidentally violating someone's right to data security can result in evidence being thrown out. Just as with any legal search, a warrant is needed, and not having one will void any evidence found. Beyond national legal barriers, the nature of the internet allows users to freely send files between countries with ease. If information is stored in another country, continuing the investigation requires international cooperation. While many countries inside NATO and the UN are working on legislation that would make international data investigations easier, storing data around the globe remains a common tool of hackers and other computer criminals seeking to maintain anonymity.

Looking Forward

Data security is a serious concern in our world, and it will only grow in importance given our everyday reliance on digital storage and communication. As computer technology continues to advance at its current pace, both forensics and anti-forensics tools will advance alongside it as more capable software is developed. With AI research being done at universities across the world, it is quite possible that future forensics tools will be adaptive and learn to find patterns by themselves. We already have learning security tools, such as Norton or McAfee virus protection for home computers, which remember which programs you mark as safe and make educated guesses in the future based on your preferences. This only scratches the surface of what such software is capable of, leaving much to be discovered in the future. With the advancement of software comes the negative side too: more powerful resources for cyber criminals to carry out their operations undetected. Data forensics, and information security as a whole, can therefore be seen as a never-ending race to stay ahead of computer criminals. As a result, the industry continues to flourish, and new analysts are always needed as software advances take place every day.

Categories
Operating System

CPU Overclocking: Benefits, Requirements and Risks

The Benefits of Overclocking

Overclocking is, essentially, using the settings present on the motherboard in order to have the CPU run at higher speeds than what it’s set to run by default. This comes at the cost of increased heat production, as well as potential reduction of lifespan, though for many people the benefits far outweigh the risks.

Overclocking allows you to basically get ‘free’ value from your hardware, potentially letting the CPU last longer before it needs an upgrade, as well as just generally increasing performance in high demand applications like gaming and video editing. A good, successful overclock can grant as much as a 20% performance increase or more, as long as you’re willing to put in the effort.

Requirements 

Overclocking is pretty simple nowadays, however, there are some required supplies and specifications to consider before you’ll be able to do it. For most cases, only computers that you put together yourself will really be able to overclock, as pre-built ones will rarely have the necessary hardware, unless you’re buying from a custom PC builder.

The most important thing to consider is whether or not your CPU and motherboard even support overclocking. For Intel, any CPU with a "K" on the end of its name, such as the recently released i7-7700k, can be overclocked. AMD has slightly different rules, with many more of their CPUs being unlocked for overclockers to tinker with. Always check the specific SKU you're looking at on the manufacturer's website, so you can be sure it's unlocked!

Motherboards are a bit more complicated. For Intel chips, you’ll need to pick up a motherboard that has a “Z” in the chipset name, such as the Z170 and Z270 motherboards which are both compatible with the previously mentioned i7-7700k. AMD, once again, is a bit different. MOST of their motherboards are overclock-enabled, but once again you’re going to want to look at the manufacturer’s websites for whatever board you’re considering.

Another thing to consider is the actual overclocking-related features of the motherboard you get. Any motherboard that has the ability to overclock will be able to overclock to the same level (though this was not always the case), but some motherboards have built in tools to make the process a bit easier. For instance, some Asus and MSI motherboards in particular have what is essentially an automated overclock feature. You simply click a button in the BIOS (the software that controls your motherboard), and it will automatically load up a fairly stable overclock!

Of course, the automatic system isn’t perfect. Usually the automated overclocks are a bit conservative, which guarantees a higher level of stability, at the cost of not fully utilizing the potential of your chip. If you’re a tinkerer like me who wants to get every drop of performance out of your system, a manual overclock is much more effective.

The next thing to consider is your cooling system. One of the major byproducts of overclocking is increased heat production, as you usually have to turn up the stock voltage of the CPU in order to get it to run stably at higher speeds. The stock coolers that come in the box with some CPUs are almost definitely not going to be enough, so much so that Intel doesn’t even include them in the box for their overclockable chips anymore!

You’re definitely going to want to buy a third party cooler, which will run you between 30-100 dollars for an entry level model, depending on what you’re looking for. Generally speaking, I would stick with liquid cooling when it comes to overclocks, with good entry level coolers like the Corsair h80i and h100i being my recommendations. Liquid cooling may sound complicated, though it’s fairly simple as long as you’re buying the all-in-one units like the Corsair models I mentioned above. Custom liquid cooling is a whole different story, however, and is WAY out of the scope of the article.

If you don't want to fork over the money for a liquid cooling setup, air cooling is still effective on modern CPUs. The Cooler Master Hyper 212 EVO is a common choice for a budget air cooler, running just below 40 bucks. However, air cooling isn't going to get you the same low temperatures as liquid cooling, which means you won't be able to push as high of an overclock unless you want to compromise the longevity of your system.

The rest of the requirements are pretty mundane. You’re going to want a power supply that can handle the higher power requirement of your CPU, though to be honest this isn’t really an issue anymore. As long as you buy a highly rated power supply from a reputable company of around 550 watts or higher, you should be good for most builds. There are plenty of online “tier-lists” for power supplies; stick to tier one or two for optimal reliability.

The only other thing you’ll need to pick up is some decent-quality thermal compound. Thermal compound, also called thermal paste, is basically just a grey paste that you put between the CPU cooler and the CPU itself, allowing for more efficient heat transfers. Most CPU coolers come with thermal paste pre-applied, but the quality can be dubious depending on what brand the cooler is. If you want to buy your own, I recommend IC Diamond or Arctic Silver as good brands for thermal compound.

Risks

Overclocking is great, but it does come with a few risks. They aren’t nearly as high as they used to be, given the relative ease of modern overclocking, but they’re risks to be considered nonetheless.

When overclocking, what we’re doing is increasing the multiplier on the CPU, allowing it to run faster. The higher we clock the CPU, the higher voltage the CPU will require, which will thus produce more heat.
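The arithmetic itself is simple: the CPU's clock speed is its base clock (typically 100 MHz on modern Intel platforms) times the multiplier you set in the BIOS. A quick illustration using the same numbers as my own chip, mentioned later in this article (your results will vary):

BASE_CLOCK_MHZ = 100  # typical BCLK on modern Intel platforms

def core_speed_ghz(multiplier, base_clock_mhz=BASE_CLOCK_MHZ):
    return multiplier * base_clock_mhz / 1000

print(core_speed_ghz(42))  # 4.2 GHz -- the i7-7700k's stock multiplier
print(core_speed_ghz(50))  # 5.0 GHz -- a multiplier of 50, a common overclock target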

Heat is the main concern with CPUs, and too much heat can lead to a shorter lifespan for the chip. Generally speaking, once your CPU is consistently running above 86 degrees Celsius, you're starting to get into the danger zone. Temperatures like that certainly won't kill your CPU immediately, but they could lower its functional lifespan overall.

For most people, this won’t really be an issue. Not many people nowadays plan on having their computer last for 10 years and up, but it could be something to be worried about if you do want to hold onto the computer for awhile. However, as long as you keep your temperatures down, this isn’t really something you need to worry about. Heat will only outright kill a CPU when it exceeds around 105 degrees Celsius, though your CPU should automatically shut off at that point.

The other main risk is voltage. As previously mentioned, in order to achieve higher overclocks you also need to increase the voltage provided to the CPU. Heat is one byproduct of this which is a problem, but the voltage itself could also be a problem. Too high voltage on your CPU can actually fry the chip, killing it.

For absolute safety, many people recommend not going above 1.25v, and just settling for what you can get at that voltage. However, most motherboards will allow you to set anything up to 1.4v before notifying you of the danger.

My personal PC runs at 1.3v, and some people do go as high as 1.4v without frying the chip. There really isn’t a hard and fast rule, just make sure to check out what kind of voltages people are using for the hardware you bought, and try to stick around that area.

Essentially, as long as you keep the CPU cool (hence my recommendation for liquid cooling), and keep the voltages within safe levels (I’d say 1.4v is the absolute max, but I don’t recommend even getting close to it), you should be fine. Be wary, however, as overclocking will void some warranties depending on who you’re buying the CPU from, especially if the CPU ends up dying due to voltage.

Afterthoughts – The Silicon Lottery

Now that you understand the benefits of overclocking, as well as the risks and requirements, there’s one more small concept; the silicon lottery.

The silicon lottery is the commonly used term to describe variance in CPU overclocks, depending on your specific CPU. Basically; just because you bought the same model of CPU as someone else doesn’t mean it will run at the same temperatures and overclock to the same point.

I have an i7-7700k that I’m cooling with a Corsair h100i v2. I am able to hold a stable 5ghz overclock at 1.3v, the stock settings being 4.2ghz at around 1.2v. However, not everyone is going to achieve results like this. Some chips might be able to hit 5ghz at slightly below 1.3v, some might only be able to achieve 4.8 at 1.3v. It really is just luck, and is the main reason that overclocking takes time to do. You can’t always set your CPU to the same settings as someone else, expecting it to work. It’s going to require some tinkering.

Hopefully, this article has helped you understand overclocks more. There are some risks, as well as some specific hardware requirements, but from my perspective they’re all worth the benefits.

Always remember to do your research, and check out a multitude of overclocking guides. Everyone has different opinions on what voltages and temperatures are safe, so you’ll need to check out as many resources as possible.
If you do decide that you want to try overclocking, then I wish you luck, and may the silicon lottery be ever in your favor!

Categories
Security Web

Private Data in the Digital Age

Former U.S. spy agency contractor Edward Snowden is wanted by the United States for leaking details of U.S. government intelligence programs

In a scenario where someone has a file of information stored on a private server with the intent to keep it private, is it ever justified for someone else to exploit a security flaw and post the information anonymously on the internet? There is a fine line where "it depends" on the scenario, but that answer alone does not do the question justice, as there are extenuating circumstances in which this kind of theft and distribution is justifiable.

One such case is whistle-blowing. Edward Snowden is still a man of much controversy. Exiled for leaking sensitive government documents, some label him a hero, others a traitor. Snowden enlisted in the Army with hopes of joining the Special Forces and later joined the CIA as a technology specialist. He stole top-secret documents showing the National Security Agency and FBI tapping directly into the central servers of leading U.S. internet companies to extract personal data. Snowden leaked these documents to the Washington Post, exposing the PRISM program, which collected Americans' private data from those companies' servers. The program was born out of a failed warrantless domestic surveillance act and kept under lock and key to circumvent the public eye. Americans were unaware of, and alarmed by, the breadth of unwarranted government surveillance programs collecting, storing, and searching their private data.

Although Snowden illegally distributed classified information, the government was, in effect, doing the same thing with the personal data of its constituents. I would argue that Snowden is a hero. He educated the American people about the NSA overstepping its bounds and infringing upon American rights. Governments exist to ensure the safety of the populace, but privacy concerns will always be in conflict with government surveillance and threat prevention. The government should not operate in the shadows; it is beholden to its people, and they are entitled to know what is going on.

The United States government charged Snowden with theft, "unauthorized communication of national defense information," and "willful communication of classified communications intelligence information to an unauthorized person." The documents that came to light following Snowden's leaks pertained only to unlawful practices and did not compromise national security. It therefore appears as though the government is trying to cover up its own mistakes. Perhaps this is most telling in one of Edward Snowden's recent tweets:

“Break classification rules for the public’s benefit, and you could be exiled.
Do it for personal benefit, and you could be President.” – @Snowden

This commentary on Hillary Clinton shows that, in the eyes of the government, who is right and who is wrong changes on a case-by-case basis. In many ways, Snowden's case mirrors Daniel Ellsberg's leak of the Pentagon Papers in 1971. The Pentagon Papers contained evidence that the U.S. government had misled the public about the Vietnam War, strengthening anti-war sentiment among the American populace. In both cases, whistle-blowing was a positive force, educating the public about abuses happening behind its back. While in general stealing private information and distributing it to the public is wrong, in these cases the crime of stealing served to expose a larger evil and provide a wake-up call for the general population.

Alternatively, in the vast majority of cases, accessing private files via a security flaw is malicious, and the government should pursue charges. While above I advocated for a limited form of "hacktivism," that was a special case meant to expose abuses by the government that fundamentally infringed on the right to privacy. In almost all cultures, religions, and societies, stealing is recognized as wrongdoing and should rightfully be treated as such; stealing sensitive information and posting it online should be treated in the same manner. Publishing incriminating files about someone else online can ruin their life. For example, during the infamous iCloud hack, thousands of nude or pornographic pictures of celebrities were released online. This was private information that the leaker took advantage of for personal gain, and for many female celebrities it was degrading and humiliating. The leaker responsible for the iCloud leaks was therefore not justified in taking and posting the files. While the definition of leaking sensitive information for the "common good" can itself be a blurred line, a situation like the iCloud leak evidently does not fit in this category. Hacking Apple's servers to access and leak inappropriate photos can only be labeled a malevolent attack on female celebrities, with potentially devastating repercussions for their careers.

While the iCloud hack was a notorious example of leaking private data in a hateful way, there are even more profound ways in which posting private data can destroy someone's life. Most notably, stealing financial information and identification (such as a Social Security number) can have a huge, detrimental effect. My grandmother was a victim of identity theft: someone she knew and trusted stole her personal information and used it for personal gain. The same scenario plays out online constantly and can drain someone's life savings, reduce their access to credit and loans, and leave them with a tarnished reputation. Again, we draw a line between leaking something in the public's interest and exploiting a security flaw for the leaker's benefit. By gaining access to personal files, hackers can wreak havoc and destroy lives. This type of data breach is obviously unacceptable and cannot be justified.

Overall, taking sensitive material and posting it anonymously online can generally be regarded as wrongdoing; however, there are exceptions, such as whistle-blowing, where the leaker is acting for the common good. Those cases are few and far between, and the "bad cases" have harmful repercussions that can follow someone throughout their life. Ultimately, to recall Snowden's case, everyone has a right to privacy. That is why leveraging a security flaw and posting someone's files online is wrong from the get-go: it overrides personal privacy. In an increasingly digital world it is difficult to keep anything private, but everyone has a fundamental right to privacy that should not be disrespected or infringed upon.

Categories
Operating System

The Touch Bar may seem like a gimmick, but has serious potential

The first iPhone came out in 2007. At the time, people had BlackBerrys and Palm PDAs: phones that came with physical keyboards and a stylus. The iPhone was immediately praised for its aesthetics but criticized for its limited functionality. As development that expanded the iPhone's functionality took off, so did the phone itself. After wrestling the market away from traditionally styled PDAs, iPhones and Android phones began leaving the competition in the dust.

Jump forward to today. The new MacBook Pros come with a touch strip (marketed as the Touch Bar) in place of the function keys that used to reside in the top row. While those functions haven't gone away entirely, Apple decided that a touch strip would enable a more dynamic style of computing. Of course, Apple detractors look at this as a sign that Apple is running out of ideas and resorting to gimmicks.

I recently got my hands on one of these MacBook Pros, and yes, there are obvious shortcomings. Though the computer is beautifully engineered and designed, it's questionable that the Touch Bar itself isn't high definition (or a Retina display, as Apple would have marketed it). As for using it, it does feel a little weird at first, since you don't get the tactile response you get from any other key on the keyboard, but I've gotten used to it. There are also some minor design flaws that can be annoying: the volume and brightness adjustment sliders aren't the most intuitive, I've managed to press the power button a couple of times when I meant to hit the delete key, and some functions the Touch Bar is heavily advertised for are sometimes buggy, particularly scrubbing through a video. So much for Apple's reputation for quality control.

But it's easy to see why Apple might envision the Touch Bar as the next evolution in laptop computing. It's clear that Apple doesn't believe in a laptop/tablet hybrid à la the Surface Pro; even Microsoft doesn't seem to be buying into that concept as much anymore. The dynamism that the Touch Bar offers, or perhaps more importantly has the potential to offer, is far more appealing. And though the Touch Bar may seem limited in functionality and usefulness today, it's a little like the original iPhone: a lot depends on the software development that follows.