TiVo Community
Old 03-31-2012, 10:28 AM   #1
johnh123
Registered User
 
Join Date: Dec 2000
Location: Over there
Posts: 415
which nas to get - pytivo, streambaby, etc

So I'm going to be getting a new NAS. Consumer level, only need two drives for mirroring. Ideally I want to be able to stream to my TiVo Premiere as well, or at least use pyTivo; I don't want to have to keep a computer running all the time for that. Which NAS can I easily set up pyTivo on, and are there any consumer-level NAS units that can run Streambaby?
Old 04-01-2012, 07:15 AM   #2
jcthorne
Registered User
 
Join Date: Jan 2002
Location: Houston
Posts: 1,828
There are several that could fit your requirements, but the lower-end Synology NAS units are well supported both here and at the Synology forums and will do what you want. That is, other than transcoding video on the fly; the low-end units just don't have the processing power for it. But pyTivo, vidmgr, and jukebox all run well on the Linux-based Synology units, and they are well-regarded NAS units as well.

I will say that you should choose wisely when sizing your NAS. Once all this works well for you, you may very well find your storage needs expanding. It's far cheaper to buy a NAS that can be expanded as your needs grow than to buy one that cannot and have to do it again. I've been very happy with my 1511, which I started with three 1TB drives and now run with seven 3TB drives; each upgrade and addition was handled by the NAS smoothly and without downtime. The dynamic hybrid RAID functionality is very useful.
__________________
Current : Roamio Base with 2TB drive and 2 Premieres, OTA. kmttg, pyTivo, running with a Synology 1511 NAS....serving up the world.

Old 04-01-2012, 10:18 AM   #3
Iluvatar
Registered User
 
 
Join Date: Jul 2006
Posts: 377
I have a Synology DS411slim with 4x2TB drives and a couple of external USB/eSATA drives. I'm running pyTivo on it along with some helpful utilities like SickBeard and Transmission. Works just great. FFmpeg is really slow on these devices; while compiling a custom FFmpeg build does help slightly, it is usually best to make sure whatever you put on the NAS to feed pyTivo is either TiVo-compatible or remuxable by pyTivo for quick transfers.

I don't use Streambaby, but I know that it requires Java. I am uncertain how Java performs on, or is installed onto, these NAS devices, so I would look into that if Streambaby is a requirement.
Old 04-01-2012, 10:58 AM   #4
jcthorne
Registered User
 
Join Date: Jan 2002
Location: Houston
Posts: 1,828
Java on the NAS would be a problem. Would suggest using vidmgr and pytivo to effectively replace the function of streambaby from the NAS.
Old 04-01-2012, 12:19 PM   #5
wmcbrine
Resistance Useless
 
 
Join Date: Aug 2003
Posts: 8,758
You could also try my HME/VLC as a partial replacement for Streambaby. Despite the name, it can run without VLC. It will work in the same environment as the other programs mentioned above (i.e. just add Python). It's only a partial replacement because it lacks the ability to rebuffer files over 1.1 GB that Streambaby has, among other things. On the other hand, it handles RSS feeds directly (and live streams, but you probably don't want to try those without VLC).
Old 04-01-2012, 12:22 PM   #6
johnh123
Registered User
 
Join Date: Dec 2000
Location: Over there
Posts: 415
I'm looking at the 212+. Drives are hot-swappable, so it seems I could start with 2x1TB and expand over time up to 2x4TB. By the time I need more than that, it will be time for a new NAS.

If I have, say, an MKV file (h.264 video, AAC 2.0 audio), would it take a lot of time for pyTivo to transcode that, or could you begin viewing within a reasonable amount of time?
Old 04-01-2012, 12:34 PM   #7
wmcbrine
Resistance Useless
 
 
Join Date: Aug 2003
Posts: 8,758
If you use the "push" system, recent versions of pyTivo will remux that to an MP4 in practically no time, and send it to the TiVo without having to transcode it. However you then add an unknown delay due to the nature of push (it depends on TiVo.com's servers).
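For anyone curious what a remux actually is: it's just a container change with the streams copied untouched, which is why it's so fast. A rough sketch (the filenames and exact flag set are illustrative, not pyTivo's actual command line):

```python
# Build an ffmpeg invocation that changes only the container, copying the
# H.264 and AAC streams into an MP4 without re-encoding. This mirrors the
# kind of remux described above; pyTivo's real command line may differ.
def remux_command(src, dst):
    return ["ffmpeg", "-i", src,
            "-c:v", "copy",   # copy the video stream bit-for-bit
            "-c:a", "copy",   # copy the audio stream bit-for-bit
            dst]

print(" ".join(remux_command("show.mkv", "show.mp4")))
```

Because no re-encoding happens, the operation is I/O-bound and finishes quickly even on a low-powered NAS.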
Old 04-01-2012, 05:38 PM   #8
jcthorne
Registered User
 
Join Date: Jan 2002
Location: Houston
Posts: 1,828
My experience has been that h.264 video compatibility DEPENDS on its encoding levels. The video rendering chip in the TiVo is somewhat limited. For 720p, level 4.1 with 5 reference frames is usually reliable and plays without difficulty. For 1080p24 and 1080i30, use level 4.1 with NO MORE than 4 reference frames, or the TiVo has a high likelihood of not working well. H.264 level 5 is not compatible.

In order to keep the WAF as high as possible, I prefer to prep all video on the server into TiVo-compatible files before pyTivo ever sees them, and avoid hiccups at play time. Many files will remux and play fine on the fly, as pyTivo is designed to do; I prefer to eliminate the 10% or more that do not.

All of my video is stored as MP4 files with AC3 audio, level 4.1 with 4 reference frames. These files also play as-is on my WDTV and EVO 3D smartphone.

TiVo plays AC3 5.1 or AAC 2.0. It cannot handle AAC 5.1 or DTS, although pyTivo can recode the audio pretty quickly.
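For reference, the limits described above could be approximated with an ffmpeg command along these lines. This is a hedged sketch only; the exact flags and filenames are illustrative, not the actual prep script:

```python
# Sketch of an encode matching the limits described above: H.264 level 4.1,
# no more than 4 reference frames, AC3 audio, MP4 container. Illustrative
# only; tune bitrate and other settings to taste.
def tivo_safe_encode(src, dst, refs=4, level="4.1"):
    return ["ffmpeg", "-i", src,
            "-c:v", "libx264",
            "-level", level,      # stay within the TiVo decoder's limits
            "-refs", str(refs),   # 4 reference frames max for 1080 material
            "-c:a", "ac3",        # AC3 5.1 plays natively on the TiVo
            dst]

print(" ".join(tivo_safe_encode("movie.mkv", "movie.mp4")))
```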
Old 04-01-2012, 10:03 PM   #9
Iluvatar
Registered User
 
 
Join Date: Jul 2006
Posts: 377
Quote:
Originally Posted by jcthorne View Post
My experience has been that h.264 video compatibility DEPENDS on its encoding levels. The video rendering chip in the TiVo is somewhat limited. For 720p, level 4.1 with 5 reference frames is usually reliable and plays without difficulty. For 1080p24 and 1080i30, use level 4.1 with NO MORE than 4 reference frames, or the TiVo has a high likelihood of not working well. H.264 level 5 is not compatible.
I agree, based on my experience. I do wish FFmpeg reported in greater detail, so the info could be parsed by pyTivo and checked against the TiVo's limits.
Old 04-01-2012, 10:26 PM   #10
wmcbrine
Resistance Useless
 
 
Join Date: Aug 2003
Posts: 8,758
Recent FFmpegs do provide more detail, if we can parse it --

Quote:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'oncgaga.mp4':
  Metadata:
    major_brand      : isom
    minor_version    : 512
    compatible_brands: isomiso2avc1mp41
    creation_time    : 1970-01-01 00:00:00
    encoder          : Lavf53.3.0
  Duration: 01:22:20.10, start: 0.000000, bitrate: 2202 kb/s
    Stream #0.0(und): Video: h264 (Constrained Baseline), yuv420p, 640x360 [PAR 1:1 DAR 16:9], 2005 kb/s, 29.97 fps, 29.97 tbr, 2997 tbn, 59.94 tbc
    Metadata:
      creation_time  : 1970-01-01 00:00:00
    Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16, 192 kb/s
    Metadata:
      creation_time  : 1970-01-01 00:00:00
I think the highlighted bits may be relevant.
Old 04-02-2012, 06:32 AM   #11
jcthorne
Registered User
 
Join Date: Jan 2002
Location: Houston
Posts: 1,828
The mp41 might be the level, but it does not appear to show the number of reference frames used in the stream. Not sure where else pyTivo could get this info. I use the utility MediaInfo, but it's Windows-based.
Old 04-02-2012, 07:20 AM   #12
Iluvatar
Registered User
 
 
Join Date: Jul 2006
Posts: 377
Quote:
Originally Posted by jcthorne View Post
The mp41 might be the level, but it does not appear to show the number of reference frames used in the stream. Not sure where else pyTivo could get this info. I use the utility MediaInfo, but it's Windows-based.
MediaInfo works on Linux and OS X as well. pyTivo could definitely parse its output, but it would be yet another binary dependency the user would have to download and provide to pyTivo.

It looks like the FFmpeg output could at least be used to determine whether the video is >L4.1. However, it seems the information is not reliably provided for every h264 video, and relies on the original encoder to have written the metadata.
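As a sketch of what such a parser might look like, here is a minimal pass over the kind of stream line FFmpeg prints. The sample line is from the output quoted earlier; note it carries a profile but no level or ref-frame count, which is exactly the gap being discussed:

```python
import re

# Sample video-stream line of the sort FFmpeg prints (taken from the
# output quoted earlier in the thread).
LINE = ("Stream #0.0(und): Video: h264 (Constrained Baseline), yuv420p, "
        "640x360 [PAR 1:1 DAR 16:9], 2005 kb/s, 29.97 fps")

def video_info(line):
    """Pull the codec name and (if present) the profile out of a stream line."""
    m = re.search(r"Video:\s*(\w+)(?:\s*\(([^)]*)\))?", line)
    if not m:
        return None
    return {"codec": m.group(1), "profile": m.group(2)}

print(video_info(LINE))   # {'codec': 'h264', 'profile': 'Constrained Baseline'}
```

A parser like this can only reject what FFmpeg actually reports; files missing the metadata would still slip through.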
Old 04-02-2012, 09:38 AM   #13
johnh123
Registered User
 
Join Date: Dec 2000
Location: Over there
Posts: 415
Now I see the Netgear ReadyNAS Ultra 2 Plus can be had for about the same as the Synology 212+. The Netgear has a dual-core Atom processor and 1GB of RAM, much better specs than the Synology. Anyone have any experience with pyTivo on the Netgear box?
Old 04-08-2012, 01:58 PM   #14
lrhorer
Registered User
 
Join Date: Aug 2003
Location: San Antonio, Texas, USA
Posts: 6,849
Quote:
Originally Posted by johnh123 View Post
I'm looking at the 212+. Drives are hot-swappable, so it seems I could start with 2x1TB and expand over time up to 2x4TB.
Oh, hey! Sure enough, Hitachi has some 4T drives available. When did those get released? I missed that one.

Anyway, to your suggestion, I would avoid drive expansions, if I were you. They certainly can be done, and I have done a few myself, but I would recommend spindle expansions, rather than drive expansions. Of course that also means abandoning your plan to deploy RAID1. I would recommend going with RAID5 or RAID6 and increasing the number of spindles for growth purposes, rather than swapping to larger drive sizes. When it does come time to increase the drive size, I would recommend building a whole new array with fewer spindles and copying the data over to the new array. Note a second array does not necessarily require a new chassis, as long as the chassis has room for the additional spindles.

Quote:
Originally Posted by Proszell View Post
By the time I need more than that it will be time for a new nas.
I wouldn't really bet on it. It's amazing how fast data can expand. Of course your needs may well be different, but in the beginning my arrays expanded a lot faster than 2T per year. In your case, that would mean you could be looking at your first expansion in less than six months, and a new NAS in less than 2 years. It's up to you, but I would plan for more aggressive expansion, especially at first.

Quote:
Originally Posted by Proszell View Post
If I have say an mkv file, h264, aac 2.0, would it take a lot of time for pytivo to transcode that, or could you begin viewing within a reasonable amount of time?
Well, in pull mode, one can always start viewing immediately. What happens is one soon encounters a pause if the server cannot keep up with the program bit rate. In push mode, the TiVo will enforce enough buffering that one should not encounter any pauses, but this does mean one may not be able to start viewing immediately. Note that with S3-class TiVos, recoding the video to h.264 in a .mp4 container beforehand will allow the data to transfer via push much, much faster, without transcoding on the fly.
Old 04-09-2012, 11:19 AM   #15
johnh123
Registered User
 
Join Date: Dec 2000
Location: Over there
Posts: 415
OK, looks like I'm going with the DS411+. Room for growth, and if I can run SABnzbd, pyTivo, and vidmgr, I think I should be set for some time. I think I will start with 2x1.5T RAID1; then when I add another 1.5T I will have 3T of RAID5, and when I add another I will have 4.5T of RAID5, if I understand it correctly. I'm not really one who keeps movies after I watch them, so this is primarily for photos and home movies.
Old 04-09-2012, 11:18 PM   #16
lrhorer
Registered User
 
Join Date: Aug 2003
Location: San Antonio, Texas, USA
Posts: 6,849
If it were me, I would avoid an online RAID migration. I would start with RAID5 or RAID6 with a missing member. You can do 2x1.5T RAID5/6, which with 2 spindles will give you the same amount of storage as RAID1.
Old 04-10-2012, 01:21 AM   #17
johnh123
Registered User
 
Join Date: Dec 2000
Location: Over there
Posts: 415
Will Synology let me set up a RAID5/6 with only two drives? What about SHR - recommended or not?
Old 04-10-2012, 04:01 PM   #18
jcthorne
Registered User
 
Join Date: Jan 2002
Location: Houston
Posts: 1,828
The Synology system will let you do better than that. Set up the 2 drives as a Synology Hybrid RAID (SHR) to start. This will allow you to add drives and increase the size of the drives as you go, without the need for a full rebuild; it does the rebuild in the background over a couple of hours or days. The size of the Hybrid RAID volume will always be the total size of all drives, less the size of the largest drive, always with single redundancy. It supports dual redundancy as well, but on a 4-bay machine that's kind of pointless. It's a very stable and well-done system.
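That capacity rule (total of all drives minus the largest) is simple enough to check in a couple of lines before buying drives. My own summary of the rule as stated, not Synology's code:

```python
# Usable space of a single-redundancy SHR volume, per the rule above:
# total of all drives minus the largest drive.
def shr_usable_tb(drives):
    return sum(drives) - max(drives)

print(shr_usable_tb([1.5, 1.5]))       # 1.5 -- same usable space as RAID1
print(shr_usable_tb([3, 3, 2, 2, 2]))  # 9
```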
Old 04-10-2012, 04:29 PM   #19
lrhorer
Registered User
 
Join Date: Aug 2003
Location: San Antonio, Texas, USA
Posts: 6,849
Quote:
Originally Posted by jcthorne View Post
The Synology system will let you do better than that. Setup the 2 drives as a Synology Hybrid RAID to start. This will allow you to add drives and increase the size of the drives as you go without the need for a full rebuild.
As a veteran of more RAID reshapes than I care to recall, I can say without reservation that the fact that a system supports (or claims to support) OLRM (online RAID migration) and various non-standard geometries is not a good reason for the user to try to take advantage of those features. Even under the best circumstances, a RAID re-shape is not a trivial thing. Typically, every single byte of data must be read and re-written to new locations. The more complex the re-map, the more fragile it is.

If anyone cares to doubt me, I suggest they subscribe to the Linux-RAID mailing list for a few weeks.

Quote:
Originally Posted by jcthorne View Post
It does the rebuild in the background over a couple of hours/days.
That's not the issue. Virtually all RAID reconfiguration, and certainly any OLRM is accomplished in the background. The amount of time an OLRM will take depends on how large the RAID members are and how fast they can be read and written. A typical array with 1T members will take at a minimum about a day to a day and a half to re-shape. The entire time, the array is in jeopardy, and anything from a drive failure on down can potentially trash the entire array. With 3T members, expect it to take closer to a week. The odds of one out of four drives failing during any given week are not reassuringly low.

Quote:
Originally Posted by jcthorne View Post
The size of the Hybrid RAID volume will always be the total size of all drives, less the size of the largest drive, always with single redundancy. It supports dual redundancy as well, but on a 4-bay machine that's kind of pointless.
That depends on his needs. Some people demand triple or even quadruple redundancy. RAID1 (and RAID10) certainly supports multiple mirrors. RAID5 allows a good compromise for moderate-sized arrays. RAID6 allows one to achieve something closer to the redundancy of RAID1 with a level of economy more like RAID5.

Last edited by lrhorer : 04-10-2012 at 04:37 PM.
Old 04-10-2012, 04:32 PM   #20
lrhorer
Registered User
 
Join Date: Aug 2003
Location: San Antonio, Texas, USA
Posts: 6,849
Quote:
Originally Posted by johnh123 View Post
Will synology let me set up a raid 5/6 with only two drives? 'What about shr- recommended or not?
I can't speak specifically to the Synology, but a RAID5 array can certainly in general be built from any number of spindles, including just one (with one missing). Similarly, a RAID6 array with one member missing can be built from 2 drives. Adding the 3rd member should cause the array to automatically resync.
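On Linux software RAID, this degraded-create trick is spelled with the literal word "missing" in place of the absent member. A sketch using mdadm syntax (device names are hypothetical, and this only prints the command rather than running it):

```python
# Build the mdadm command for a RAID5 array created with one member
# missing, per the trick described above. Device names are hypothetical.
def degraded_raid5(devices, md="/dev/md0"):
    members = list(devices) + ["missing"]
    return ["mdadm", "--create", md,
            "--level=5",
            "--raid-devices=%d" % len(members)] + members

print(" ".join(degraded_raid5(["/dev/sda1", "/dev/sdb1"])))
```

When the real third drive arrives, adding it (something like `mdadm /dev/md0 --add`) lets the array rebuild into the empty slot on its own.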
Old 04-11-2012, 09:27 AM   #21
jcthorne
Registered User
 
Join Date: Jan 2002
Location: Houston
Posts: 1,828
Quote:
Originally Posted by lrhorer View Post
That's not the issue. Virtually all RAID reconfiguration, and certainly any OLRM is accomplished in the background. The amount of time an OLRM will take depends on how large the RAID members are and how fast they can be read and written. A typical array with 1T members will take at a minimum about a day to a day and a half to re-shape. The entire time, the array is in jeopardy, and anything from a drive failure on down can potentially trash the entire array. With 3T members, expect it to take closer to a week. The odds of one out of four drives failing during any given week are not reassuringly low.
Using the Synology Hybrid RAID, this has not been the case. During one of my expansions, from six 3T drives to seven, about 50% of the way through, one of the drives failed. I did not lose the array. I replaced the drive (drive 2 in this case) and the rebuild started. Then the expansion restarted. The redundancy is expanded to the new drive before the array is expanded.

Also, expansion from 5 to 6 took approximately 36 hours, not a week. Total time for 6 to 7 was 3 days, but that was because of the failure.

I am not saying it's perfect, nor that there is no increased risk while the array is expanded, just that for most incidents it's not a total loss. I keep, and highly recommend, a full backup in place prior to any array reconfig.

Also, I really saw no difference between an expansion by increasing a drive size vs. adding a drive; either operation took about the same time to complete.
Old 04-11-2012, 05:29 PM   #22
lrhorer
Registered User
 
Join Date: Aug 2003
Location: San Antonio, Texas, USA
Posts: 6,849
Quote:
Originally Posted by jcthorne View Post
Using the Synology Hybrid RAID, this has not been the case. During one of my expansions, from six 3T drives to seven, about 50% of the way through, one of the drives failed. I did not lose the array.
No RAID reshape is ever supposed to lose data, but that does not mean it doesn't happen. Take it from someone who has suffered through well over a dozen array failures. Note an expansion from N to N + 1 drives all of uniform size does not have as many critical phases, compared with some OLRM operations.

Quote:
Originally Posted by jcthorne View Post
I replaced the drive (drive 2 in this case) and the rebuild started. Then the expansion restarted.
No, it resumed. It is not possible to miraculously instantaneously revert to the old structure. While the rebuild was occurring, part of your array had N members, and part had N + 1 members. Had the OS lost track of exactly which part had N members and which part had N + 1 members, your array would have been hosed. Fortunately in your case the failed operation was not the one which kept track of how far the expansion had progressed.

The most likely cause of such a failure would be a power failure or a drive controller failure, possibly accompanied by or caused by a drive failure.

Quote:
Originally Posted by jcthorne View Post
The redundancy is expanded to the new drive before the array is expanded.
That depends on the type of re-shape and exactly when it fails. Take for example an expansion from 4 data drives to 5 data drives on any RAID level higher than 1. Every bit of data on the drives must be re-organized and re-written. Ignoring parity for the moment, prior to the re-shape, the data is divided up into 4 sets of moderately sized chunks. Member 1 contains chunks 1, 5, 9, 13, 17, etc. Member 2 contains chunks 2, 6, 10, 14, 18, and so forth. After the re-shape is complete, the data is divided up into 5 sets, not 4. As the re-shape progresses, chunk 5 is moved from member 1 to member 5. Chunk 6 is moved from member 2 to member 1, and so on. Going from 4 drives to 5, less than 17% of the information originally on any member winds up back on that same member (allowing for parity). The superblock contains the information that tells the OS driver what the organization is, but during the re-shape, some fraction of the data in the array is no longer organized that way, so the OS has to keep track of what portion of the array does not match the superblock, which is re-written either at the beginning or the end of the migration. Obviously, this information is stored somewhere and updated every time the blocks are moved around on the drive members. If that information is corrupted (perhaps by a failed write to a drive in the array), then POOF! goes the entire array, or at least some portion of it.

Keep in mind as well that for RAID levels greater than 1, parity must also be re-calculated and written. In the case of RAID6, at least twice the amount of parity data is calculated and written compared to RAID5. A 5-member RAID5 array is divided into 4 sets of data information and 1 of parity. When the array is expanded to 6 members, there are now 5 sets of data completely different in organization from the original 4, plus a parity that is completely different in every aspect from the original parity, being a checksum of chunks 1-5, 6-10, 11-15, etc., rather than of 1-4, 5-9, 10-14, etc.
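A toy model makes the scale of the data movement concrete. With simple round-robin placement (0-indexed, parity ignored), chunk i lives on member i mod width, so a 4-to-5 member reshape relocates the large majority of chunks:

```python
# Simplified round-robin striping model (parity ignored): chunk i sits on
# member i % width. Count how many of the first 1000 chunks land on a
# different member after a 4-to-5 member reshape.
def member_of(chunk, width):
    return chunk % width

moved = sum(1 for c in range(1000) if member_of(c, 4) != member_of(c, 5))
print(moved)   # 800 -- four chunks in five must be rewritten elsewhere
```

The parity rotation of a real RAID5/6 layout shuffles things further, but the point stands: almost nothing stays where it was.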

Quote:
Originally Posted by jcthorne View Post
Also expansion from 5 to 6 took approx 36 hrs. Not a week.
That depends on a number of factors. The time it takes to re-sync the array is directly proportional to the member size and inversely proportional to the write speed of the drives being written. An array built of small, fast drives will re-sync much, much faster than one built of larger, slower drives. It also depends on how much of the data is being re-located. In addition, all the most popular RAID management software limits the amount of bandwidth allocated to the re-shape so that users do not complain about slow file access. Many admins will go in after hours and increase the resource limits in order to speed up the array re-shape, and then back the bandwidth down again during working hours so they don't impact the users. Of course, in the case of an array in someone's home, doling out dribs and drabs of data for things like a TiVo, one may choose to simply let the re-shape rip. In any case, however, a consumer-class drive can probably manage a continuous write of about 30 MB/sec or so. On a 1T drive, that works out to no less than 9.3 hours at maximum speed. A 3T drive stretches that to 27.8 hours. If the drive is being read as well as written, it can easily double that to more than 55 hours, and that is best case.

The brand or type of array is not terribly relevant, since generally speaking it is the drive that limits the re-sync performance, not usually the array.
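The back-of-the-envelope arithmetic behind those figures is just member size divided by sustained write speed:

```python
# Lower bound on re-sync time: every byte of a member must be written at
# least once, so time >= member size / sustained write speed.
def resync_hours(member_bytes, write_bytes_per_sec):
    return member_bytes / write_bytes_per_sec / 3600.0

TB = 10**12
print(round(resync_hours(1 * TB, 30e6), 1))   # 9.3 hours for a 1T member
print(round(resync_hours(3 * TB, 30e6), 1))   # 27.8 hours for a 3T member
```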

Quote:
Originally Posted by jcthorne View Post
I am not saying it's perfect, nor that there is no increased risk while the array is expanded, just that for most incidents it's not a total loss.
That "most incidents" are not fatal is poor consolation to anyone whose array is toast. The wise person will eliminate as much as possible the most dangerous paths by not taking them in the first place, especially not if the path would be taken merely for convenience' sake.

Quote:
Originally Posted by jcthorne View Post
Also, I really saw no difference between an expansion by increasing a drive size vs. adding a drive; either operation took about the same time to complete.
Adding a drive most of whose space does not get consumed by the array takes precisely the same amount of time as adding a drive of an appropriate size. Syncing a 10T array built of five 2T drives will take at least twice the time that syncing a 10T array built of ten 1T drives will, for the same sync operation. Again, the principal limit is the size of the members divided by the speed of the members.

Last edited by lrhorer : 04-11-2012 at 06:16 PM.
Old 04-11-2012, 06:09 PM   #23
MichaelK
Registered User
 
Join Date: Jan 2002
Location: NJ
Posts: 7,299
Back to the original question: I've always been impressed by the HP N40L. You basically get a barebones NAS box without an OS. You can put FreeNAS on it (for free), Windows Home Server (for around 50 bucks), Win7, or pretty much any Linux you want. Just lots of options. If you watch, there are frequently deals for the box, or the box with extra memory or with a second drive.

Since it takes a "normal" OS, any app you want can run on it.

The RAID choices are limited off the shelf, but I think FreeNAS has some sort of software RAID. WHS has paid add-ins that give lots of RAID-like options, similar to the old WHS Drive Extender feature.

If you prefer real hardware RAID, the box takes PCI Express cards, so you can add it easily.

All that in a nice small package.

Just tossing it out there.
Old 04-12-2012, 10:17 AM   #24
hefe
Rebus Philbin
 
 
Join Date: Dec 2000
Location: Mile High, but sober
Posts: 26,521
Just a data point... my NAS is unRAID in an HP MicroServer case. The PC that I installed pyTivo on yesterday has the shares mapped as drives. I put the Movies share in the pyTivo configuration, and it's working just fine as far as I can tell.
__________________
These are not the hammer.

Hefe's a cruel man, but fair.~edhara
That hefe, he's really smart!~Fish Man
Old 04-12-2012, 10:35 AM   #25
lrhorer
Registered User
 
Join Date: Aug 2003
Location: San Antonio, Texas, USA
Posts: 6,849
Quote:
Originally Posted by hefe View Post
Just a data point... my NAS is unRAID in an HP MicroServer case. The PC that I installed pyTivo on yesterday has the shares mapped as drives. I put the Movies share in the pyTivo configuration, and it's working just fine as far as I can tell.
There's nothing particularly wrong with such a deployment, but I prefer to load all the servers on the same machine that hosts the array. It certainly will work, however, to load the servers on an external machine and host the files from a machine dedicated to nothing more than providing the storage.
Old 04-12-2012, 10:39 AM   #26
hefe
Rebus Philbin
 
 
Join Date: Dec 2000
Location: Mile High, but sober
Posts: 26,521
It may be a next step to figure out how to load servers on the unRAID system, which is Linux-based... but one step at a time... still learning how all this stuff plays together!
Old 04-12-2012, 01:15 PM   #27
jcthorne
Registered User
 
Join Date: Jan 2002
Location: Houston
Posts: 1,828
Quote:
Originally Posted by lrhorer View Post
No RAID reshape is ever supposed to lose data, but that does not mean it doesn't happen. Take it from someone who has suffered through well over a dozen array failures. Note an expansion from N to N + 1 drives all of uniform size does not have as many critical phases, compared with some OLRM operations.

Adding a drive most of whose space does not get consumed by the array takes precisely the same amount of time as adding a drive of an appropriate size. Syncing a 10T array built of five 2T drives will take at least twice the time that syncing a 10T array built of ten 1T drives will, for the same sync operation. Again, the principal limit is the size of the members divided by the speed of the members.
lrhorer, you know more about RAID array internal architecture than I ever want to know. I was just relating what I have observed as a user of the Synology Hybrid RAID system on my NAS. As for the risk, I look at it a bit differently. Since I will not embark on an array reshape without a complete backup, my risk is one of loss of time, not loss of data. If the reshape fails, I have to do a long and tedious restore. If it succeeds, I save time and have an up-to-date backup. In this case it's ALL about convenience; I do not have other users to worry about, nor is the loss of the system for a few days going to be catastrophic. The convenience of expanding the array as my storage needs grow is very useful to me. There is risk, but in my case the risk of loss of data is pretty small.

On the expansion times: I know that at least on my 1511, all 7 drives are working at the same time, in parallel, during an expansion. It also dynamically allocates resources to maintain file access speeds vs. background tasks. If I don't use the files, the process goes much faster; i.e., it makes much more progress during the overnight hours without me having to do anything to reallocate resources.

On the redundancy: when a drive is added to an existing N+1 array, the first task is that the array becomes N+2 (dual redundancy). It then expands the array block by block, recalculating the parity so that as it goes, each becomes an N+1 block again. At no time are there any blocks that are N+0. Yes, the allocation tables could get corrupted, but those are N+2 redundant (3 identical tables) until the very end, and then N+1 when the operation is complete. This way, if one does end up corrupted during the expansion, the NAS knows what the right answer is. Simple 2oo3 voting.

The SHR allows expansion by enlarging a drive already in the array. I.e., if there are 2ea 3T drives and 3ea 2T drives in the array, with a capacity of 9T and N+1 redundancy, I can replace one of the 2T drives with a 3T and end up with a 10T N+1 array.
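Those capacity figures can be reproduced with a little sketch of how SHR appears to carve up mixed-size drives. This is my reading of Synology's published description of SHR (layered arrays, one slice per distinct drive size), not their actual code:

```python
def shr_capacity(drives_tb):
    """Sketch of SHR usable capacity with single-drive redundancy:
    slice the drives into layers at each distinct size, run a
    RAID5/RAID1-style layer (one slice of redundancy) across every
    drive big enough to participate, and sum the usable space."""
    total, floor = 0, 0
    for size in sorted(set(drives_tb)):
        members = [d for d in drives_tb if d >= size]
        layer = size - floor            # thickness of this slice
        if len(members) >= 2:
            total += layer * (len(members) - 1)
        floor = size
    return total

print(shr_capacity([3, 3, 2, 2, 2]))   # the 9T mixed array above
print(shr_capacity([3, 3, 3, 2, 2]))   # after swapping one 2T for a 3T: 10T
```

The 2T slice across all five drives yields a 5-member RAID5 (8T), and the extra 1T on the larger drives forms its own redundant layer on top, which is why growing one drive can grow the whole volume.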

These automated array management and expansion capabilities are the main reason I ended up going with a Synology NAS vs. many of the others. I actually considered a DROBO box for a week or so but ended up dropping it due to its lackluster performance and closed environment.
__________________
Current : Roamio Base with 2TB drive and 2 Premieres, OTA. kmttg, pyTivo, running with a Synology 1511 NAS....serving up the world.

jcthorne is offline   Reply With Quote
Old 04-12-2012, 03:22 PM   #28
lrhorer
Registered User
 
Join Date: Aug 2003
Location: San Antonio, Texas, USA
Posts: 6,849
Quote:
Originally Posted by jcthorne View Post
lrhorer, you know more about RAID array internal architecture than I ever want to know. I was just relating what I have observed as a user of the Synology Hybrid RAID system on my NAS. As for the risk, I look at it a bit differently. Since I will not attempt an array reshape without a complete backup, my risk is one of loss of time, not loss of data. If the reshape fails, I have to do a long and tedious restore. If it succeeds, I save time and have an up-to-date backup. In this case it's ALL about convenience; I do not have other users to worry about, nor would the loss of the system for a few days be catastrophic. The convenience of expanding the array as my storage needs grow is very useful to me. There is risk, but in my case the risk of loss of data is pretty small.
Your point is taken. Indeed, there is a difference between the loss of the data and the loss of an array. Sometimes the array or a part of it is recoverable without requiring a restore from backup.

The salient point here is the OP needs to take this into account one way or the other. The worst mistake made by noobs (and it is a very common one) is to think of a RAID array as fault-proof. It is fault-tolerant, not fault-proof. Having a RAID array (even a multiple spindle RAID1 array) does not eliminate the need for a good, comprehensive backup strategy.

Quote:
Originally Posted by jcthorne View Post
On the expansion times: I know that at least on my 1511, all 7 drives are working at the same time, in parallel, during an expansion.
This should normally always be the case, but the limit is still how fast the data can be written to the slowest member. Writing 1T of data to a 1T hard drive takes a very specific amount of time.
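As a back-of-the-envelope check (the 100 MB/s sustained write rate below is just an assumed figure; real drives vary):

```python
def sync_hours(member_bytes, write_rate_mb_s):
    """Lower bound on re-sync time: every byte of the largest
    member must be written at least once, so the floor is
    member size divided by sustained write speed, regardless
    of how many members work in parallel."""
    return member_bytes / (write_rate_mb_s * 1e6) / 3600

# Five 2T members vs. ten 1T members at the same assumed 100 MB/s:
# the 2T members take twice as long, despite the smaller drive count.
print(sync_hours(2e12, 100))   # ~5.6 hours per pass
print(sync_hours(1e12, 100))   # ~2.8 hours per pass
```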

Quote:
Originally Posted by jcthorne View Post
It also dynamically allocates resources to maintain file access speeds vs. background tasks.
Typically a minimum and maximum bandwidth is specified for the sync operation. Lowering the maximum will make sure more resources are available for real-time access. Raising the minimum will force the array to spend more resources on the re-sync operation. Anything between the two parameters is up for grabs.
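On Linux md, for instance, those two bounds are the `dev.raid.speed_limit_min` and `dev.raid.speed_limit_max` sysctls. The clamping behavior is simple to sketch (illustrative only, not the driver's actual code):

```python
def resync_rate(available_mb_s, min_mb_s, max_mb_s):
    """Effective re-sync bandwidth: the sync consumes at least
    the minimum (even at the expense of real-time I/O), at most
    the maximum, and otherwise whatever bandwidth real-time
    access leaves free."""
    return max(min_mb_s, min(available_mb_s, max_mb_s))

print(resync_rate(50, 10, 200))    # idle-ish array: sync takes the slack
print(resync_rate(5, 10, 200))     # busy array: sync still gets its floor
print(resync_rate(500, 10, 200))   # capped at the configured maximum
```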

Quote:
Originally Posted by jcthorne View Post
If I don't use the files, the process goes much faster.
Well, somewhat faster, depending on the level of file access. What really can add up is the seek times. Even for the re-sync itself, the drive heads have to swing back and forth across the platters to first read several sectors from one part of the drive and then write a similar amount of information to a different part of the drive. Add in a random seek to serve up data from the array every few ms, and it can really bog down the resync process.


Quote:
Originally Posted by jcthorne View Post
On the redundancy: when a drive is added to an existing N+1 array.
Sorry, by "N + 1" I did not mean an array with N data members plus 1 parity. I meant an array with N data members migrating to an array with N + 1 data members.

Quote:
Originally Posted by jcthorne View Post
The first task is that the array becomes N+2 (dual redundancy).
That is more or less an expansion to RAID6 from RAID5. It doesn't increase the space on the array. More importantly, none of the blocks would be in the correct order. Expanding a RAID5 array from 5 members to 6 requires reading chunks 1 - 5 (reading member #1 twice, for chunks 1 and 5), calculating parity, writing it to member #6, and then writing chunk #5 to member #5, overwriting the parity formerly stored there. Next, chunks 6 - 10 are read (this time reading member #2 twice, for chunks 6 and 10), parity is calculated and written to member #5, and then chunk 6 is written to member #1, chunk 7 is written to member #2, ... and chunk 10 is written to member #6. Next, chunks 11 - 15 are read, parity is written to member #4, and the data is written back to the members starting with #3. If the re-sync routine is written properly, it is true that it is easy to maintain parity so there is always at least one copy of the parity, but that doesn't prevent the OS from losing the information that tells it how many of the chunks have been converted from 5 members to 6.

RAID3 and RAID4 assign an actual drive for parity, but RAID5 and RAID6 employ distributed parity, which means on a 6 member RAID5 array, 20% of the parity is written to each drive.
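The rotation means a chunk's home location depends on the member count, which is why every block has to move during an expansion. A toy mapping for one common layout (left-asymmetric rotation; real drivers support several layouts, so treat this as a sketch):

```python
def raid5_location(chunk, n):
    """Map a data chunk number to (stripe, member) on an n-member
    RAID5 array with left-asymmetric parity rotation: parity
    starts on the last member and shifts one member per stripe;
    data fills the remaining slots in order."""
    stripe = chunk // (n - 1)
    parity_member = (n - 1) - (stripe % n)
    member = chunk % (n - 1)
    if member >= parity_member:     # skip over the parity slot
        member += 1
    return stripe, member

# The same chunk lands on a different member once the array
# grows from 5 to 6 drives -- hence the full reshuffle:
print(raid5_location(5, 5))   # (1, 1)
print(raid5_location(5, 6))   # (1, 0)
```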

Quote:
Originally Posted by jcthorne View Post
It then expands the array block by block, recalculating the parity so that as it goes, each becomes an N+1 block again.
The parity has to be re-calculated for every block as it changes from N chunks to N + 1 chunks.

Suppose we have the following set of data, and that each chunk is only 8 bits. When we start out, the blocks look like this:

1 10000000
2 11000000
3 11100000
4 11110000
P 10101111
-----------
5 00000001
6 00000011
7 00000111
P 11110101
8 00001111
-----------
9 00011111
A 00111111 ...

When we expand those blocks onto six drives, they now look like this:

1 10000000
2 11000000
3 11100000
4 11110000
5 00000001
P 10101110
-----------
6 00000011
7 00000111
8 00001111
9 00011111
P 11010100
A 00111111
-----------
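A quick sketch to check the tables above. Note that these tables compute parity as the complement of the XOR (odd parity), whereas plain RAID5 parity is usually the XOR alone; either way, the arithmetic is self-consistent:

```python
from functools import reduce

def parity(chunks, bits=8):
    """Parity as used in the tables above: the bitwise complement
    of the XOR of the data chunks (odd parity over 8-bit chunks)."""
    x = reduce(lambda a, b: a ^ b, chunks)
    return x ^ ((1 << bits) - 1)

stripe5 = [0b10000000, 0b11000000, 0b11100000, 0b11110000]
print(format(parity(stripe5), "08b"))             # 10101111, as in the first table
print(format(parity(stripe5 + [0b00000001]), "08b"))  # 10101110, after expansion to 6 drives
```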

Quote:
Originally Posted by jcthorne View Post
At no time are there any blocks that are N+0.
Yeah, that's easy enough. All that is required is the information be written to empty sectors, and then the pointers updated to show the new locations. The big problem occurs if an error occurs when the pointers are being updated.

Quote:
Originally Posted by jcthorne View Post
Yes, the allocation tables could get corrupted, but those are N+2 redundant (3 identical tables) until the very end, and then N+1 when the operation is complete.
RAID doesn't have allocation tables. The superblock defines the extent of the array and its organization. That's why it only takes a moment to create an array, and why the size of the array is fundamentally identical to the total size of the members. If the superblock says it is a RAID5 array with 6 members, then the RAID driver assumes member #1 will contain chunks 1, 6, 11, 16, etc. If that organization changes halfway through the array, then a "ghost" superblock that tells the driver it needs to start looking for 6 chunks per block, rather than 5, starting at block #2000 needs to be created and maintained. Of course the superblock is small and can easily be duplicated multiple times, but what does the driver do if two of the copies of the "ghost" superblock disagree on where the 6-chunk blocks start?
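The "2oo3 voting" mentioned earlier is easy to sketch, and the sketch also shows exactly where it breaks down:

```python
from collections import Counter

def vote(copies):
    """2-out-of-3 vote over redundant superblock copies. With
    three copies and one corrupted, two still agree and the
    driver knows the right answer. If all three disagree, there
    is no majority and the driver cannot know -- exactly the
    failure mode described above."""
    value, count = Counter(copies).most_common(1)[0]
    return value if count > len(copies) // 2 else None

print(vote([2000, 2000, 2048]))   # one corrupted copy: recoverable
print(vote([2000, 2016, 2048]))   # all three disagree: unrecoverable
```

The hypothetical values here stand in for the "6-chunk blocks start at block #2000" marker in the ghost superblock; they are illustrative, not a real on-disk format.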

Quote:
Originally Posted by jcthorne View Post
The SHR allows expansion by enlarging a drive already in the array. I.e., if there are 2ea 3T drives and 3ea 2T drives in the array, with a capacity of 9T and N+1 redundancy, I can replace one of the 2T drives with a 3T and end up with a 10T N+1 array.
And badly degraded performance, not to mention unbalanced I/O loading. It's a good way to grind away at the 3T spindles. It most certainly can be done, but a better strategy is to buy drives all of one size, that being the lowest cost per GB.

At this point one should note the member size and the spindle size do not have to be the same under any RAID implementation. For example, my main array consisted of fourteen 1T spindles, while the backup consisted of ten 1.5T spindles. When I upgraded my main array, I purchased eight 3T drives (before the flood, thank goodness!) and copied the data over from the fourteen spindle array to a new array built of the 3T spindles. Suddenly, I had a secondary array that was a bit too small, and fourteen spare 1T drives. I took four of the 1T drives and built a pair of 2T RAID0 arrays. I then took those arrays and attached them to the array built of 1.5T spindles. It means 500G of each pair of 1T drives is unused, but it also means any time I like I can replace a pair of 1T drives with a single 1.5T drive, if need be. As an aside, since the RAID0 arrays are striped across two drives, those 2T members are very fast compared with the 1.5T drives.

Last edited by lrhorer : 04-12-2012 at 03:42 PM.
lrhorer is offline   Reply With Quote
Old 04-15-2012, 06:56 PM   #29
johnh123
Registered User
 
Join Date: Dec 2000
Location: Over there
Posts: 415
Quote:
Originally Posted by jcthorne View Post
Java on the NAS would be a problem. Would suggest using vidmgr and pytivo to effectively replace the function of streambaby from the NAS.
I see that a number of packages available for the atom synology use java. If the synology can handle serviio, I'd think it could handle streambaby.
johnh123 is offline   Reply With Quote
Old 04-16-2012, 12:21 PM   #30
jcthorne
Registered User
 
Join Date: Jan 2002
Location: Houston
Posts: 1,828
Quote:
Originally Posted by lrhorer View Post
Your point is taken. Indeed, there is a difference between the loss of the data and the loss of an array. Sometimes the array or a part of it is recoverable without requiring a restore from backup.

And badly degraded performance, not to mention unbalanced I/O loading. It's a good way to grind away at the 3T spindles. It most certainly can be done, but a better strategy is to buy drives all of one size, that being the lowest cost per GB.

Thanks for the great explanations. I learn more as this goes on. I have to admit most of what I 'think' I know about RAID comes from advertising and instruction manuals, not an intimate knowledge of the internals.

One item I can relate from the above, with regard to an array across mixed-size drives (a combo of 2T and 3T drives in my case): while performance may be degraded internal to the NAS, from an external user standpoint, where data rates are limited to a dual-gigabit pipe, there was no difference in speed between the mixed array and the array with all 3T spindles. I can consistently move two 85+ MBps streams between the NAS and 2 users (each limited to a single gigabit connection). Perhaps it's my network that needs a bit of improvement, not the NAS.

I guess my point was only that I have been pretty impressed with my Synology NAS's ability to 'take care of itself' from a RAID management point of view and allow me to grow the array as my needs increase. I know there is a lot going on under the hood. Thanks again for pulling back the curtain so we can see a bit of it.
__________________
Current : Roamio Base with 2TB drive and 2 Premieres, OTA. kmttg, pyTivo, running with a Synology 1511 NAS....serving up the world.

jcthorne is offline   Reply With Quote