Please note!

To achieve high throughput, we recommend that all clients use grid antennas; we very, very strongly advise using a GRID antenna. Use an antenna like this one:

http://www.mikrotik.com

ENJOY ... SHARE

LAZUARDY NETWORK

Best Connection


Extending a USB Cable

The steps:

UTP >>> USB
white-orange + orange (2 wires) >>> red (VCC, +5 V)
white-green (1 wire) >>> white (D-)
green (1 wire) >>> green (D+)
white-blue, blue, white-brown, brown (4 wires) >>> black (GND)
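As a sanity check, the mapping above can be written out as a short script (a sketch only; the UTP conductor names follow the common color pairs and the USB wire colors follow the standard USB 2.0 pinout):

```python
# Sketch: map the eight UTP conductors to the four USB wires,
# following the pairing described above.
UTP_TO_USB = {
    "white-orange": "red (VCC)",
    "orange":       "red (VCC)",
    "white-green":  "white (D-)",
    "green":        "green (D+)",
    "white-blue":   "black (GND)",
    "blue":         "black (GND)",
    "white-brown":  "black (GND)",
    "brown":        "black (GND)",
}

# Every UTP conductor must be assigned, and all four USB wires covered.
assert len(UTP_TO_USB) == 8
assert {v.split()[0] for v in UTP_TO_USB.values()} == {"red", "white", "green", "black"}
print("All 8 conductors mapped")
```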

Hopefully, with the method above, the noise that used to be large can be minimized or even eliminated entirely, and the USB cable can be extended as far as you like (though not more than 100 meters, since that is the recommended maximum run for UTP cable).

GOOD LUCK!

Adding a New Hard Disk in pfSense


Using Command Line Utilities

Using Slices

This setup will allow your disk to work correctly with other operating systems that might be installed on your computer and will not confuse other operating systems' fdisk utilities. It is recommended to use this method for new disk installs. Only use dedicated mode if you have a good reason to do so!
# dd if=/dev/zero of=/dev/da1 bs=1k count=1   # Zero out the start of the disk
# fdisk -BI da1                               # Initialize your new disk
# bsdlabel -B -w da1s1 auto                   # Label it

# bsdlabel -e da1s1 # Edit the bsdlabel just created and add any partitions.
# mkdir -p /1
# newfs /dev/da1s1e # Repeat this for every partition you created.
# mount /dev/da1s1e /1 # Mount the partition(s)

# vi /etc/fstab # Add the appropriate entry/entries to your /etc/fstab.
If you have an IDE disk, substitute ad for da.

Dedicated

If you will not be sharing the new drive with another operating system, you may use the dedicated mode. Remember this mode can confuse Microsoft operating systems; however, no damage will be done by them. IBM's OS/2® however, will “appropriate” any partition it finds which it does not understand.
 
# dd if=/dev/zero of=/dev/da1 bs=1k count=1

# bsdlabel -Bw da1 auto
# bsdlabel -e da1               # create the `e' partition
# newfs /dev/da1e
# mkdir -p /1

# vi /etc/fstab               # add an entry for /dev/da1e
# mount /1
An alternate method is:
# dd if=/dev/zero of=/dev/da1 count=2
# bsdlabel /dev/da1 | bsdlabel -BR da1 /dev/stdin

# newfs /dev/da1e
# mkdir -p /1
# vi /etc/fstab                   # add an entry for /dev/da1e
# mount /1 

For the proxy:

# chown -R proxy:proxy /cache160gb
# vi /etc/fstab
/dev/ad6s1a    /cache160gb    ufs    rw,noasync,noatime    2    2


pfSense: Speed-up Transparent Squid Proxy

It's been a few days since I did some tweaking on the Squid proxy, and it appears stable! This all came about as I was trying to speed up data fetching and finding that, for some reason, the cache was just too slow for actual use. I wondered if it was at all worth it (obviously a slow proxy means unhappy users, especially if they're your home users).

In gratitude to the discussion I found in the forum, it's reposted and rearranged here in summary below:

Question:
Why is Squid so slow?

Answer:
The default pfSense configuration is tuned for a router, not for a server; that is why it sets kern.ipc.nmbclusters="0". Simply remove this line and Squid will be just fine.

Add the lines below to /boot/loader.conf:
kern.ipc.nmbclusters=32768
kern.maxfiles=65536
kern.maxfilesperproc=32768
net.inet.ip.portrange.last=65535

Alternatively, just delete the file's contents and replace them with:
autoboot_delay="1"
#kern.ipc.nmbclusters="0"
hint.apic.0.disabled=1
kern.hz=100
#for squid
kern.ipc.nmbclusters="32768"
kern.maxfiles="65536"
kern.maxfilesperproc="32768"
net.inet.ip.portrange.last="65535"
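As a quick sanity check, a small script can parse loader.conf-style content and confirm the tunables meet the values above (a sketch; the recommended minimums are the ones from this post, not official pfSense defaults):

```python
# Sketch: verify that loader.conf sets the tunables recommended above.
RECOMMENDED = {
    "kern.ipc.nmbclusters": 32768,
    "kern.maxfiles": 65536,
    "kern.maxfilesperproc": 32768,
    "net.inet.ip.portrange.last": 65535,
}

def parse_loader_conf(text):
    """Parse key=value lines, ignoring comments; values may be quoted."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip('"')
    return settings

conf = parse_loader_conf('''
autoboot_delay="1"
#kern.ipc.nmbclusters="0"
kern.ipc.nmbclusters="32768"
kern.maxfiles="65536"
kern.maxfilesperproc="32768"
net.inet.ip.portrange.last="65535"
''')

for key, minimum in RECOMMENDED.items():
    assert int(conf[key]) >= minimum, f"{key} too low"
print("loader.conf ok")
```

Note that commented-out lines (like the old #kern.ipc.nmbclusters="0") are correctly ignored.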

Squid Proxy Server: Fine Tuning to Achieve Better Performance

Cache peers or neighbors

Cache peers or neighbors are the other proxy servers with which our Squid proxy server can share its cache, to reduce bandwidth usage and access time, or which it can use as parent or sibling proxy servers to satisfy its clients' requests. We normally deploy more than one proxy server in the same network to share the load of a single server for better performance. The proxy servers can use each other's caches to retrieve cached web documents locally, improving performance. Let's have a brief look at the directives provided by Squid for communication among different cache peers.

Declaring cache peers

The directive cache_peer is used to tell Squid about proxy servers in our neighborhood. Let's have a quick look at the syntax for this directive:

cache_peer HOSTNAME_OR_IP_ADDRESS TYPE PROXY_PORT ICP_PORT [OPTIONS]

Here, HOSTNAME_OR_IP_ADDRESS is the hostname or IP address of the target proxy server or cache peer. TYPE specifies the type of the proxy server, which in turn determines how that proxy server will be used by ours. The other proxy servers can be used as a parent, a sibling, or a member of a multicast group.

Time for action – adding a cache peer

Let's add a proxy server (parent.example.com) that will act as a parent proxy to our proxy server:

cache_peer parent.example.com parent 3128 3130 default proxy-only

3130 is the standard ICP port; if the other proxy server is not using the standard ICP port, we should change the line accordingly. This line directs Squid to use parent.example.com as a proxy server to satisfy client requests when it is not able to do so itself. The option default specifies that this cache peer should be used as a last resort, when other peers can't be contacted. The option proxy-only specifies that content fetched using this peer should not be cached locally.
This is helpful when we don't want to replicate cached web documents, especially when the two peers are connected with a high-bandwidth backbone.

What just happened?

We added parent.example.com as a cache peer, or parent proxy, to our Squid proxy server. We also used the option proxy-only, which means that requests fetched using this cache peer will not be cached on our proxy server. There are several other options with which you can add cache peers for various purposes, such as building a hierarchy.

Quickly restricting access to domains using peers

If we have added a few proxy servers as cache peers to our Squid server, we may want a little control over the requests being forwarded to the peers. The directive cache_peer_domain is a quick way to achieve this. Its syntax is quite simple:

cache_peer_domain CACHE_PEER_HOSTNAME [!]DOMAIN1 [[!]DOMAIN2 ...]

Here, CACHE_PEER_HOSTNAME is the hostname or IP address used when declaring the cache peer with the cache_peer directive. We can specify any number of domains which may be fetched through this cache peer. Adding a bang (!) as a prefix to a domain name will prevent the use of this cache peer for that particular domain.

Let's say we want to use the videoproxy.example.com cache peer for browsing video portals like YouTube, Netflix, Metacafe, and so on:

cache_peer_domain videoproxy.example.com .youtube.com .netflix.com
cache_peer_domain videoproxy.example.com .metacafe.com

These two lines configure Squid to use the videoproxy.example.com cache peer only for requests to the domains youtube.com, netflix.com, and metacafe.com. Requests to other domains will not be forwarded using this peer.

Advanced control on access using peers

We just learned about cache_peer_domain, which provides a way to control access using cache peers. However, it's not really flexible in granting or revoking access.
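To make the matching behavior concrete, here is a small sketch of how cache_peer_domain-style rules can be evaluated (a simplified first-match model for illustration; Squid's exact fallback behavior may differ, and this is not Squid's code):

```python
# Sketch: emulate cache_peer_domain matching for one peer.
# A domain starting with "." matches that domain and any subdomain;
# a leading "!" excludes the domain from this peer.

def peer_allowed(request_host, peer_domains):
    """Return True if this peer may be used for request_host."""
    for pattern in peer_domains:
        negate = pattern.startswith("!")
        dom = pattern.lstrip("!")
        if dom.startswith("."):
            matched = request_host == dom[1:] or request_host.endswith(dom)
        else:
            matched = request_host == dom
        if matched:
            return not negate
    return False  # no pattern matched: don't use this peer

video_peer = [".youtube.com", ".netflix.com", ".metacafe.com"]
print(peer_allowed("www.youtube.com", video_peer))   # True
print(peer_allowed("www.example.org", video_peer))   # False
```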
That's where cache_peer_access comes into the picture; it provides a very flexible way to control access to cache peers using ACLs. The syntax and implications are similar to other access directives such as http_access:

cache_peer_access CACHE_PEER_HOSTNAME allow|deny [!]ACL_NAME

Let's write a configuration that allows only the clients on the network 192.0.2.0/24 to use the cache peer acadproxy.example.com for accessing YouTube, Netflix, and Metacafe:

acl my_network src 192.0.2.0/24
acl video_sites dstdomain .youtube.com .netflix.com .metacafe.com
cache_peer_access acadproxy.example.com allow my_network video_sites
cache_peer_access acadproxy.example.com deny all

In the same way, we can use other ACL types to achieve finer control over access to various websites using cache peers.

Caching web documents

All this time, we have been talking about the caching of web documents and how it helps in saving bandwidth and improving the end-user experience. Now it's time to learn how and where Squid actually keeps these cached documents so that they can be served on demand. Squid uses main memory (RAM) and hard disks for storing or caching web documents. Caching is a complex process, but Squid handles it beautifully and exposes the relevant directives in squid.conf, so that we can control how much should be cached and what should be given the highest priority. Let's have a brief look at the caching-related directives provided by Squid.

Using main memory (RAM) for caching

Web documents cached in main memory can be served very quickly, as the read/write speed of RAM is very high compared to hard disks with mechanical parts. However, since the space available in RAM for caching is small compared to that available on hard disks, only very popular objects, or documents with a very high probability of being requested again, are stored there.
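The first-match allow/deny evaluation of the lines above can be sketched as follows (a simplified model with one src ACL and one dstdomain ACL; the names and logic are illustrative, not Squid's implementation):

```python
import ipaddress

# Sketch: first-match evaluation of cache_peer_access rules, simplified
# to the my_network (src) and video_sites (dstdomain) ACLs above.

MY_NETWORK = ipaddress.ip_network("192.0.2.0/24")
VIDEO_SITES = (".youtube.com", ".netflix.com", ".metacafe.com")

rules = [
    # "allow my_network video_sites": both ACLs must match.
    ("allow", lambda ip, host: ipaddress.ip_address(ip) in MY_NETWORK
                               and host.endswith(VIDEO_SITES)),
    # "deny all": matches everything.
    ("deny",  lambda ip, host: True),
]

def peer_access(client_ip, request_host):
    """Return True if the peer may be used, first matching rule wins."""
    for action, acl in rules:
        if acl(client_ip, request_host):
            return action == "allow"
    return False

print(peer_access("192.0.2.10", "www.youtube.com"))   # True
print(peer_access("203.0.113.5", "www.youtube.com"))  # False
```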
As the cache space in memory is precious, documents are stored on a priority basis. Let's have a look at the different types of objects which can be cached.

In-transit objects or current requests

These are the objects related to the current requests, and they have the highest priority for the cache space in RAM. These objects must be kept in RAM; if the incoming request rate is high and the cache space in RAM is about to overflow, Squid will try to move the served part (the part which has already been sent to the client) to disk to create free space in RAM.

Hot or popular objects

These objects or web documents are popular and are requested frequently compared to others. They are stored in the cache space left after storing the in-transit objects, as they have a lower priority. These objects are generally pushed to disk when more in-RAM cache space is needed for storing in-transit objects.

Negatively cached objects

Negatively cached objects are error messages which Squid has encountered while fetching a page or web document on behalf of a client. For example, if a request for a web page has resulted in an HTTP 404 error (page not found), and Squid receives a subsequent request for the same page, then Squid will check whether the cached response is still fresh and, if so, return the reply from the cache itself. If there is a request for the same page after the negatively cached object has expired, Squid will check again whether the page is available. Negatively cached objects have the same priority as hot or popular objects, and they can be pushed to disk at any time in favor of in-transit objects.

Specifying cache space in RAM

So far we have learned how the available cache space is utilized for caching different types of objects with different priorities.
Now it's time to learn about specifying the amount of RAM we want to dedicate to caching. While deciding the RAM space for caching, we should be neither greedy nor paranoid. If we dedicate a large percentage of RAM to caching, overall system performance will suffer, as the system will start swapping processes once no free RAM is left for them. If we dedicate very little RAM to caching, we won't take full advantage of Squid's caching mechanism. The default size of the memory cache is 256 MB.

Time for action – specifying space for memory caching

We can use the extra RAM available on a running system, after sparing a chunk of memory that the running processes may need under heavy load. To find out the amount of free RAM available on our system, we can use either the top or free command. To find the free RAM in megabytes:

$ free -m

For more details, please check the top(1) and free(1) man pages.

Now, let's say we have 4 GB of total RAM on the server, and all the processes run comfortably in 1 GB of RAM. After securing another 512 MB for situations where running processes may take extra memory, we can safely allocate 2.5 GB of RAM for caching. To specify the cache size in main memory, we use the directive cache_mem. It has a very simple format; as we have learned before, we can specify the memory size in bytes, KB, MB, or GB. Let's specify the cache memory size for the previous example:

cache_mem 2500 MB

The value specified with cache_mem here is in megabytes.

What just happened?

We learned how to calculate the approximate space in main memory which can be used to cache web documents, and therefore enhance the performance of the Squid server by a significant margin.
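The arithmetic above can be captured in a tiny helper (a sketch; the numbers are the example's assumptions, not measurements from a real server):

```python
# Sketch: estimate a safe cache_mem value from the worked example above.

def safe_cache_mem_mb(total_mb, processes_mb, headroom_mb):
    """RAM left for Squid's memory cache after processes and headroom."""
    return total_mb - processes_mb - headroom_mb

# 4 GB total, 1 GB for processes, 512 MB emergency headroom:
mb = safe_cache_mem_mb(total_mb=4096, processes_mb=1024, headroom_mb=512)
print(f"cache_mem {mb} MB")   # cache_mem 2560 MB (roughly the 2.5 GB above)
```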
Have a go hero – calculating cache_mem for your machine

Note down the total RAM on your machine and calculate the approximate space, in megabytes, that you can allocate for memory caching.

Maximum object size in memory

As we have limited space in memory for caching objects, we need to use the space in an optimized way. We should plan to keep this limit rather low, as setting it too high means fewer cached objects fit in memory, and the HIT rate (the fraction of requests found in the cache) will suffer significantly. The default maximum size used by Squid is 512 KB, but we can change it depending on our value of cache_mem. So, if we want to set it to 1 MB, because we have a lot of RAM available for caching (as in the previous example), we can use the maximum_object_size_in_memory directive as follows:

maximum_object_size_in_memory 1 MB

This line sets the maximum allowed object size in the memory cache to 1 MB.

Memory cache mode

With the newer versions of Squid, we can control which objects we want to keep in the memory cache, to optimize performance. Squid offers the directive memory_cache_mode to set the mode Squid should use for the space available in the memory cache. There are three different modes available:

always: keep all the most recently fetched objects that fit in the available space. This is the default mode used by Squid.
disk: only objects which are already cached on a hard disk and have received a HIT (that is, were requested again after being cached) will be stored in the memory cache.
network: only objects which have been fetched from the network (including neighbors) are kept in the memory cache.
Setting the mode is easy, using the memory_cache_mode directive as shown:

memory_cache_mode always

This configuration line sets the memory cache mode to always, meaning that the most recently fetched objects will be kept in memory.

(Excerpted from Squid Proxy Server 3.1: Beginner's Guide — improve the performance of your network using the caching and access control capabilities of Squid. Published: February 2011.)

Using hard disks for caching

In the previous section, we learned about using main memory for caching various types of objects or web documents, to reduce bandwidth usage and enhance the end-user experience. However, the space available in RAM is small, and we can't really invest a lot in main memory, as it's very expensive in terms of bytes per unit of money. Instead, we prefer to deploy proxy servers with huge amounts of disk storage which can be used for caching objects. Let's have a look at how to tell Squid to cache objects to disk.

Specifying the storage space

The directive cache_dir is used to declare the space on the hard disk where Squid will store or cache web documents for future use. Let's have a look at the syntax of cache_dir and try to understand the different arguments and options:

cache_dir STORAGE_TYPE DIRECTORY SIZE_IN_Mbytes L1 L2 [OPTIONS]

Storage types

Operating systems implement filesystems to store files and directories on disk drives. In the Linux/Unix world, ext2, ext3, ext4, reiserfs, xfs, UFS (Unix File System), and so on, are popular filesystems. Filesystems also expose system calls such as open(), close(), read(), and so on, so that other programs can read, write, and remove files from storage.
Squid uses these system calls to interact with the filesystems and manage the cached objects on disk. On top of the filesystems, with the help of the available system calls, Squid implements storage schemes such as ufs, aufs, and diskd. All the storage schemes supported by the operating system are built by default.

The ufs scheme is very simple: all I/O transactions are done by the main Squid process. As some system calls are blocking in nature (the call does not return until the I/O transaction is complete), they can cause delays in processing requests, especially under heavy load, resulting in overall bad performance. ufs is good for servers with light load and fast disks, but is not really preferable for busy caches.

aufs is an improved version of ufs, where the a stands for asynchronous I/O. In other words, aufs is ufs with asynchronous I/O support, achieved by utilizing POSIX threads (the pthreads library). Asynchronous I/O prevents some system calls from blocking the main Squid process, meaning that Squid can keep serving requests while waiting for I/O transactions to complete. So, if our operating system supports the pthreads library, we should always go for aufs instead of ufs, especially on heavily loaded proxy servers.

The Disk Daemon (diskd) storage scheme is similar to aufs. The only difference is that diskd uses an external process for I/O transactions instead of threads. Squid and a diskd process for each cache_dir (of the diskd type) communicate using message queues and shared memory. As diskd involves a queuing system, it may get overloaded over time on a busy proxy server. So, we can pass two additional options to cache_dir which determine how Squid will behave when there are more messages in the queues than diskd is able to process.
Let's have a look at the syntax of the cache_dir directive with diskd as the storage type:

cache_dir diskd DIRECTORY SIZE_Mbytes L1 L2 [OPTIONS] [Q1=n] [Q2=n]

The value of Q1 signifies the number of pending messages in the queue beyond which Squid will not place new requests for I/O transactions. Squid will keep on serving requests normally, but it won't be able to cache new objects or check the cache for HITs; HIT performance will suffer during this period. The default value of Q1 is 64.

The value of Q2 signifies the number of pending messages in the queue beyond which Squid will cease to operate and go into blocked mode. No new requests will be served during this period, until Squid receives a reply or the number of messages in the queue falls below this value. The default value of Q2 is 72.

As you can see from this explanation, if the value of Q1 is greater than Q2, Squid will go into blocked mode first when the queue fills up; this results in higher latency but a better HIT ratio. If the value of Q1 is less than Q2, Squid will keep serving requests from the network even when no I/O is possible; this results in lower latency, but the HIT ratio will suffer considerably.

Choosing a directory name or location

We can specify any location on the filesystem for the cache directory. Squid will populate it with its own directory structure and start caching web documents in the space available. However, we must make sure that the directory already exists and is writable by the Squid process; Squid will not create the directory if it doesn't exist.

Time for action – creating a cache directory

The cache directory need not be on the same disk or partition. We can mount another disk drive and use that as the directory for caching. For example, let's say we have another drive connected as /dev/sdb with a partition /dev/sdb1; we can mount it and use it right away.
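The Q1/Q2 behavior described above can be sketched as a tiny state function (a simplified model of the description, not Squid's code; the state names are invented for illustration):

```python
# Sketch: how the diskd queue thresholds Q1 and Q2 shape Squid's behavior.

def diskd_state(pending, q1=64, q2=72):
    """Return Squid's diskd behavior for a given queue depth (defaults 64/72)."""
    if pending > q2:
        return "blocked"           # Squid stops serving until the queue drains
    if pending > q1:
        return "serving-no-cache"  # requests served, but no new cache I/O or HIT checks
    return "normal"

print(diskd_state(10))   # normal
print(diskd_state(70))   # serving-no-cache
print(diskd_state(100))  # blocked
```

With Q1 > Q2 the "blocked" branch is reached before "serving-no-cache", matching the higher-latency/better-HIT trade-off described above.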
$ mkdir /drive/
$ mount /dev/sdb1 /drive/
$ mkdir /drive/squid_cache
$ chown squid:squid /drive/squid_cache/

In this code, we created a directory /drive/ and mounted /dev/sdb1, the partition from the other disk drive, to it. Then we created a directory squid_cache inside /drive/ and changed its ownership to Squid, so that Squid has write access to it. Now we can use /drive/squid_cache/ as one of the directories with the cache_dir directive.

What just happened?

We mounted a partition from a different hard disk and assigned the correct ownership to use it as a cache directory for disk caching.

Declaring the size of the cache

This is the easy part. We must keep in mind that we should not append MB or GB to the number when specifying the size in this directive; the size is always specified in megabytes. So, if we want to use 100 GB of disk space for caching, we should set the size to 102400 (102400 MB ÷ 1024 = 100 GB). If we want to use an entire disk partition for caching, we should not set the cache size equal to the size of the partition, because Squid may need some extra space for temporary files and the swap.state file. It is good practice to subtract 5-15 percent from the total disk space for temporary files and then set the cache size.

Configuring the number of sub-directories

The cache_dir directive takes two further arguments, named L1 and L2. Squid stores cacheable objects on disk in a two-level directory hierarchy, so that looking up an object in the cache is faster: L1 determines the number of directories at the first level, and L2 the number of directories within each first-level directory. We should set L1 and L2 high enough that the second-level directories don't accumulate a huge number of files.
Read-only cache

Sometimes we may want our cache to be in read-only mode, so that Squid doesn't store or remove any cached objects but continues to serve content from it. This is achieved by using an additional option named no-store with the cache_dir directive. Note that Squid will not update any content in a read-only cache directory. This feature is used very rarely.

Time for action – adding a cache directory

So far we have learned the meaning of the different parameters used with the cache_dir directive. Let's see an example for the cache directory /squid_cache/ with 50 GB of free space:

cache_dir aufs /squid_cache/ 51200 32 512

We have a cache directory /squid_cache/ with 50 GB of free space, with L1 and L2 set to 32 and 512 respectively. If we assume the average size of a cached object to be 16 KB, there will be 51200x1024 ÷ (32x512x16) = 200 cached objects in each second-level directory, which is quite good.

What just happened?

We added /squid_cache/, with 50 GB of free disk space, as a cache directory for caching web documents on the hard disk. Following these instructions, we can add as many cache directories as we want, depending on the space available.

Cache directory selection

If we have specified multiple caching directories, we may need a more efficient algorithm to ensure optimal performance. For example, under heavy load Squid performs a lot of I/O transactions; if the load is split across the directories, this results in lower latency. Squid provides the directive store_dir_select_algorithm, which can be used to specify the way cache directories are used. It takes one of the values least-load and round-robin:

store_dir_select_algorithm least-load|round-robin

If we want to distribute cached objects evenly across the caching directories, we should go for round-robin.
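The directory-sizing arithmetic above generalizes into a one-line formula, sketched here so you can plug in your own cache size and L1/L2 values (the 16 KB average object size is the example's assumption):

```python
# Sketch: average objects per second-level directory for a cache_dir.

def objects_per_l2_dir(cache_size_mb, l1, l2, avg_object_kb):
    """Estimated cached objects in each second-level directory."""
    total_objects = cache_size_mb * 1024 / avg_object_kb
    return total_objects / (l1 * l2)

n = objects_per_l2_dir(cache_size_mb=51200, l1=32, l2=512, avg_object_kb=16)
print(n)   # 200.0, matching the worked example above
```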
If we want the best performance with the least latency, we should certainly go for least-load, where Squid will try to pick the directory with the fewest I/O operations.

Cache object size limits

It is important to place limits on the size of the web documents we are going to cache, to achieve a better HIT ratio. Depending on the results we want, we may keep the maximum limit higher than the default of 4 MB; this in turn depends on the size of the cache we have specified. For example, if we have a cache directory of 10 GB and we set the maximum cacheable object size to 500 MB, there will be fewer objects in the cache, and the HIT ratio will suffer significantly, resulting in high latency. However, we shouldn't keep the limit really low either, as this will result in lots of I/O but little bandwidth saving.

Squid provides the two directives minimum_object_size and maximum_object_size to set the object size limits. The minimum size is 0 KB by default, meaning there is no lower limit on object size. If we have a huge amount of storage dedicated to caching, we can set the maximum limit to something around 100 MB, which will make sure that popular software, audio/video content, and so on, are also cached, possibly leading to significant bandwidth savings.

minimum_object_size 0 KB
maximum_object_size 96 MB

This configuration sets the minimum and maximum object sizes in the cache to 0 (zero) and 96 MB respectively, which means that objects larger than 96 MB will not be cached.

Setting limits on object replacement

Over a period of time, the space allocated to the caching directories starts to fill up. Squid starts deleting cached objects once the space they occupy crosses a certain threshold, which is determined by the cache_swap_low and cache_swap_high directives. These directives take integer values between 0 and 100.
cache_swap_low 96
cache_swap_high 97

With these values, when the space occupied in a cache directory crosses 96 percent, Squid will start deleting objects from the cache and will try to maintain utilization near 96 percent. However, if the incoming rate is high and space utilization starts to touch the high limit (97 percent), deletion becomes quite frequent until utilization moves back towards the lower limit. Squid's defaults for the low and high limits are 90 and 95 percent respectively, which are fine if the cache directory is small (say, 10 GB). However, if we have a large amount of space for caching (a few hundred GB), we can push the limits a bit higher and closer together, because at that scale even 1 percent means a difference of more than a gigabyte.

Cache replacement policies

In the previous two sections, we learned about using main memory and hard disks for caching web documents, and how to configure Squid for optimal performance. As time passes, the cache will fill, and at some point Squid will need to purge or delete old objects from the cache to make space for new ones. Removal of objects can be achieved in several ways; one of the simplest is to start by removing the least recently used or least frequently used objects. Squid builds its removal or replacement policies on top of list and heap data structures. Let's have a look at the different policies provided by Squid.

Least recently used (LRU)

Least recently used (lru) is the simplest removal policy, and the one Squid builds by default. Squid starts by removing the cached objects that are oldest (since the last HIT). The LRU policy uses the list data structure, but there is also a heap-based implementation known as heap lru.

Greedy dual size frequency (GDSF)

GDSF (heap GDSF) is a heap-based removal policy. In this policy, Squid tries to keep popular objects with a smaller size in the cache.
In other words, if there are two cached objects with the same popularity, the larger object will be purged, making space for a greater number of less popular objects, which eventually leads to a better HIT ratio. With this policy the HIT ratio is better, but the overall bandwidth savings are small.

Least frequently used with dynamic aging (LFUDA)

LFUDA (heap LFUDA) is also a heap-based replacement policy. Squid keeps the most popular objects in the cache, irrespective of their size. This policy compromises a bit of the HIT ratio, but may give better bandwidth savings than GDSF. For example, if a large cached object gets a HIT, that is equivalent to HITs for several small popular objects; so this policy optimizes bandwidth savings instead of the HIT ratio. We should keep the maximum object size in the cache high if we use this policy, to further optimize the bandwidth savings.

Now we need to choose one of these policies for cache replacement, for main-memory caching as well as for hard disk caching. Squid provides the directives memory_replacement_policy and cache_replacement_policy for specifying the removal policies:

memory_replacement_policy lru
cache_replacement_policy heap LFUDA

These configuration lines set the memory replacement policy to lru and the on-disk cache replacement policy to heap LFUDA.

Tuning Squid for enhanced caching

Although Squid performs quite well with the default caching options, we can tune it to perform even better, by not caching unwanted web objects and by caching a few otherwise non-cacheable web documents. This achieves higher bandwidth savings and reduced latency. Let's have a look at a few techniques that can be helpful.

Selective caching

There may be cases when we don't want to cache certain web documents or requests from clients. The directive cache is very helpful in such cases and is very easy to use.
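The eviction order behind the lru policy can be illustrated with an ordered map (a sketch of the idea only; Squid's real implementation differs):

```python
from collections import OrderedDict

# Sketch: the idea behind the lru replacement policy.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, url):
        if url not in self.store:
            return None
        self.store.move_to_end(url)      # a HIT makes the object "recent"
        return self.store[url]

    def put(self, url, body):
        if url in self.store:
            self.store.move_to_end(url)
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used

cache = LRUCache(capacity=2)
cache.put("/a", "A"); cache.put("/b", "B")
cache.get("/a")            # touch /a, so /b is now the oldest
cache.put("/c", "C")       # evicts /b
print(sorted(cache.store)) # ['/a', '/c']
```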
Time for action – preventing the caching of local content

If we don't want to cache responses for certain requests or clients, we can deny caching using this directive; the default behavior is to allow all cacheable responses to be cached. As servers on our local area network are close enough that caching their responses brings little benefit, we may not want to waste cache space on our proxy server for them, and can selectively deny caching for responses from local servers:

acl local_machines dst 192.0.2.0/24 198.51.100.0/24
cache deny local_machines

These lines prevent responses from the servers in the networks 192.0.2.0/24 and 198.51.100.0/24 from being cached by the proxy server.

What just happened?

To optimize performance (especially the HIT ratio), we configured Squid not to cache objects that are available on the local area network, and learned how to selectively deny caching of such content.

Refresh patterns for cached objects

Squid provides the directive refresh_pattern, with which we can control the freshness of cached objects. Using refresh_pattern to cache non-cacheable responses, or to alter the lifetime of cached objects, may lead to unexpected behavior or responses from web servers; we should use this directive very carefully. Refresh patterns can be used to achieve higher HIT ratios by keeping recently expired objects fresh for a short period of time, or by overriding some of the HTTP headers sent by web servers. While the cache directive can make use of ACLs, refresh_pattern uses regular expressions.
The advantage of the refresh_pattern directive is that we can alter the lifetime of cached objects, whereas with the cache directive we can only control whether a request is cached at all. Let's look at the syntax of the refresh_pattern directive:

refresh_pattern [-i] regex min percent max [OPTIONS]

The parameter regex should be a regular expression describing the request URL. A refresh pattern line is applied to any URL matching its regular expression. There can be multiple refresh pattern lines; the first line whose regular expression matches the current URL is used. By default, the regular expression is case-sensitive, but we can use -i to make it case-insensitive.

Some objects or responses from web servers may not carry an expiry time. Using the min parameter, we can specify the time (in minutes) for which the object or response should be considered fresh. The default and recommended value for this parameter is 0, because raising it may cause problems or unexpected behavior with dynamic web pages. We can use a higher value when we are absolutely sure that a website doesn't serve any dynamic content.

The parameter percent determines the life of a cached object in the absence of an Expires header. An object's lifetime is taken to be the difference between the times extracted from its Last-Modified and Date headers. So, if we set percent to 50, and the difference between the times from the Last-Modified and Date headers is one hour, then the object will be considered fresh for the next 30 minutes. The response age is simply the time that has passed since the response was generated by the web server, or since it was last validated for freshness by the proxy server. The ratio of the response age to the object's lifetime is termed the lm-factor in the Squid world. Similarly, min and max are the minimum and maximum times (in minutes) for which a cached object is considered fresh.
If a cached object has spent more time in the cache than max, it is no longer considered fresh. We should note that the Expires HTTP header overrides the min and max values. Let's look at the method used for determining the freshness or staleness of a cached object. A cached object is considered:

1. Stale (expired), if the expiry time mentioned in the HTTP response headers is in the past.
2. Fresh, if the expiry time mentioned in the HTTP response headers is in the future.
3. Stale, if the response age is more than the max value.
4. Fresh, if the lm-factor is less than the percent value.
5. Fresh, if the response age is less than the min value.
6. Stale, otherwise.

Time for action – calculating the freshness of cached objects

Let's take an example refresh_pattern and calculate the freshness of an object:

refresh_pattern -i ^http://example.com/test.jpg$ 0 60% 1440

Say a client requested the image at http://example.com/test.jpg an hour ago, and the image was last modified (created) on the web server six hours ago. Assume the web server didn't specify an expiry time. So we have the following values for the different variables:

- At the time of the request, the object age was (6 - 1) = 5 hours.
- Currently, the response age is 1 hour.
- Currently, the lm-factor is 1/5 = 20 percent.

Let's check whether the object is still fresh or not:

- The response age is 60 minutes, which is not more than 1440 (the max value), so this can't be the deciding factor.
- The lm-factor is 20 percent, which is less than 60 percent, so the object is still fresh.

Now, let's calculate the time when the object will expire. The object age is 5 hours and the percent value is 60 percent, so the object will expire (5 x 60) / 100 = 3 hours after the last request, that is, 2 hours from now.

What just happened?

We learned the formula for calculating the freshness or staleness of a cached object, and also the time after which a cached object will expire.
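The freshness rules above can be sketched in a few lines of Python. This is only an illustration of the decision order described in the text, not Squid's actual implementation; all times are in minutes, and the parameter names are mine:

```python
# Hedged sketch of the freshness rules described above -- not Squid's code.
# age: response age; lifetime: Date minus Last-Modified (object lifetime);
# expires_delta: Expires minus now, or None when no Expires header was sent.
def is_fresh(age, lifetime, min_m, percent, max_m, expires_delta=None):
    if expires_delta is not None:
        return expires_delta > 0          # Expires header decides outright
    if age > max_m:
        return False                      # older than max -> stale
    if lifetime > 0:
        lm_factor = age / lifetime
        if lm_factor < percent / 100.0:
            return True                   # young relative to object lifetime
    return age < min_m                    # otherwise min decides

# The worked example: response age 60 min, lifetime 5 h (300 min),
# pattern "0 60% 1440" -> lm-factor 20% < 60%, so still fresh.
assert is_fresh(60, 300, 0, 60, 1440) is True
```

Running the worked example through this sketch reproduces the result computed by hand above.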
We also learned about specifying refresh patterns for different content types to optimize performance.

Options for refresh pattern

Most of the time, the expiry time is specified by web servers for all requests. But some web documents, such as style sheets (CSS) or JavaScript (JS) files included on a web page, change quite rarely, and we can bump their expiry time up to a higher value to take full advantage of caching. As the web servers already specify the expiry time, the cached CSS/JS files would otherwise expire automatically. To forcibly ignore the Expires header and many other caching-related headers, we can pass options to the refresh_pattern directive. Let's look at the options available for refresh_pattern and how they can help us improve the HIT ratio. Be warned that using the following options violates HTTP standards and may also cause unexpected browsing problems.

override-expire

The option override-expire overrides or ignores the Expires header, which is the main factor in determining the expiry time of a cached response. As the Expires header is ignored, the min, max, and percent parameters play the essential role in determining the freshness of a response.

override-lastmod

The option override-lastmod forces Squid to ignore the Last-Modified header, which effectively enforces the use of the min value to determine the freshness of an object. This option is of no use if we have set min to zero.

reload-into-ims

The reload-into-ims option forces Squid to convert the no-cache directives in HTTP headers into If-Modified-Since headers. This option is useful only when the Last-Modified header is present.

ignore-reload

The option ignore-reload simply ignores the no-cache or reload directives present in the HTTP headers.

ignore-no-cache

When the option ignore-no-cache is used, Squid simply ignores the no-cache directive in the HTTP headers.
ignore-no-store

The HTTP header Cache-Control: no-store tells clients that they are not allowed to store the data being transmitted. If the option ignore-no-store is set, Squid simply ignores this header and caches the response if it's cacheable.

ignore-must-revalidate

The HTTP header Cache-Control: must-revalidate means that the response must be revalidated with the originating web server before being used again. Setting the option ignore-must-revalidate forces Squid to ignore this header.

ignore-private

Private or sensitive data generally carries the HTTP header Cache-Control: private so that intermediate servers don't cache the responses. However, the option ignore-private can be used to ignore this header.

ignore-auth

If the option ignore-auth is set, Squid will be able to cache authorization requests. Using this option may be really risky.

refresh-ims

This option can be pretty useful. The option refresh-ims forces Squid to validate the cached object with the origin server whenever an If-Modified-Since request header is received from a client. Using it may increase latency, but clients will always get the latest data.

Let's see an example with these options:

refresh_pattern -i \.jpg$ 0 60% 1440 ignore-no-cache ignore-no-store reload-into-ims

This line forces all JPEG images to be cached whether the origin servers want us to cache them or not. They will be refreshed only:

- If the Expires HTTP header was present and the expiry time is in the past.
- If the Expires HTTP header was missing and the response age has exceeded the max value.

Have a go hero – forcing the Google homepage to be cached for longer

Write a refresh_pattern configuration that forces the Google homepage to be cached for six hours.
Solution:

refresh_pattern -i ^http:\/\/www\.google\.com\/$ 0 20% 360 override-expire override-lastmod ignore-reload ignore-no-cache ignore-no-store reload-into-ims ignore-must-revalidate

Aborting partial retrievals

When a client initiates a request for some data and aborts it prematurely, Squid may continue trying to fetch the data. This can waste bandwidth and other resources such as processing power and memory; however, if subsequent requests arrive for the same object, it results in a better HIT ratio. To counteract this problem, Squid provides three directives: quick_abort_min (KB), quick_abort_max (KB), and quick_abort_pct (percent). For every aborted request, Squid checks the values of these directives and takes the appropriate action according to the following rules:

- If the data remaining to be fetched is less than quick_abort_min, Squid will continue fetching it.
- If the data remaining to be transferred is more than quick_abort_max, Squid will immediately abort the request.
- If the data already transferred is more than quick_abort_pct percent of the total, Squid will keep retrieving the data.

Both quick_abort_min and quick_abort_max are in kilobytes (KB) (or any allowed memory size unit), while quick_abort_pct is a percentage. If we want to abort requests in all cases, which may be required when we are short of bandwidth, we should set quick_abort_min and quick_abort_max to zero. If we have a lot of spare bandwidth, we can set higher values for quick_abort_min and quick_abort_max, and a relatively low value for quick_abort_pct.
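The three abort rules can be sketched as a small decision function. This is an illustration of the rules as stated, not Squid's internals; the fall-through default (abort) and the default argument values are assumptions of mine:

```python
# Sketch of the quick_abort decision rules described above (sizes in KB).
# The defaults and the final fall-through (abort) are assumptions, not
# taken from Squid's source.
def continue_fetch(transferred_kb, total_kb, abort_min=16, abort_max=16, abort_pct=95):
    remaining = total_kb - transferred_kb
    if remaining < abort_min:
        return True        # almost done -- finish the fetch
    if remaining > abort_max:
        return False       # too much left -- abort immediately
    # otherwise, keep going only if most of the data has already arrived
    return transferred_kb > total_kb * abort_pct / 100.0

# With the high-bandwidth settings shown below (1024 KB / 2048 KB / 90%):
assert continue_fetch(900, 1500, 1024, 2048, 90) is True   # remaining 600 < 1024
assert continue_fetch(100, 5000, 1024, 2048, 90) is False  # remaining 4900 > 2048
```

Note that with abort_min and abort_max both set to zero, the first rule can never apply and the second always does, so every aborted request is dropped, matching the low-bandwidth advice above.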
Let's see an example for a high-bandwidth case:

quick_abort_min 1024 KB
quick_abort_max 2048 KB
quick_abort_pct 90

Caching failed requests

Requests for resources which don't exist (HTTP error 404), or which a client doesn't have permission to access (HTTP error 403), are common, and such requests make up a significant percentage of total traffic. These responses are cacheable by Squid. However, web servers sometimes don't send Expires HTTP headers in these responses, which prevents Squid from caching them. To solve this problem, Squid provides the directive negative_ttl, which forces such responses to be cached for the specified time. The syntax of negative_ttl is as follows:

negative_ttl TIME_UNITS

Previously this value defaulted to five minutes, but in newer versions of Squid it defaults to zero seconds.

Summary

In this article we covered caching in main memory and on the hard disk in detail. We learned about using RAM and disks for caching in an optimized manner to achieve a higher HIT ratio, and we saw how to fine-tune the cache.

About the author: Kulbir Saini is an entrepreneur based in Hyderabad, India, with extensive experience in managing systems and network infrastructure. Apart from his work as a freelance developer, he provides services to a number of startups. Through his blogs, he has been an active contributor of documentation for various open source projects, most notably The Fedora Project and Squid. Besides computers, which his life practically revolves around, he loves travelling to remote places with his friends. For more details, please check http://saini.co.in/.

Script to auto-create simple queues in Mikrotik

Automatic bandwidth-setting script. Just copy this script into System --> Scripts, click OK, and then click Run:

:for e from 5 to 254 do={ /queue simple add name="user $e" target-addresses="100.169.169.$e" max-limit=384000/384000 }
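The RouterOS loop above can also be reproduced offline. As a sketch (the subnet, range, and limit are simply the values from the script above), this Python snippet generates the same queue commands, which can be pasted into a terminal instead of using System --> Scripts:

```python
# Generate the same "/queue simple add" commands as the RouterOS script
# above, one per host address in the range. Values mirror the script.
def queue_commands(subnet="100.169.169", first=5, last=254, limit="384000/384000"):
    return [
        f'/queue simple add name="user {i}" '
        f'target-addresses="{subnet}.{i}" max-limit={limit}'
        for i in range(first, last + 1)
    ]

cmds = queue_commands()
assert len(cmds) == 250                 # hosts .5 through .254
assert '100.169.169.5' in cmds[0]       # first command targets .5
```

This is handy when you want to review or edit the generated commands before applying them to the router.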

McAfee uninstaller

Delete registry values:

1. Click Start, Run, type regedit and click OK.
2. Navigate to [HKEY_CLASSES_ROOT\*\ShellEx\ContextMenuHandlers\VirusScan].
3. Right-click the VirusScan key and select Delete.
4. Navigate to [HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run].
5. Right-click the value ShStatEXE and select Delete.
6. Restart your computer.

'McAfee Agent cannot be removed while it is in managed mode'

This made sense to me, as McAfee was centrally managed from a server, though we were not given the admin password for the administration console by the previous IT support company.

Solution:
* Open CMD (admin rights needed)
* Browse to C:\Program Files\McAfee\Common Framework (Program Files (x86) if running a 64-bit operating system)
* Run the command: frminst.exe /remove=agent

GroupWise Address Book

When sending to an email address, the completed address does not appear.
Solution:
1. Open the GroupWise email client.
2. Tools - Address Book.
3. File - Name Completion Search Order ...
4. Make sure the Disable Name Completion checkbox is not ticked.
5. Select the address book (Novell GroupWise Address Book) and move it to the top.

FreeNAS

How do you build your own NAS server? Here is how. For the OS we use FreeNAS, an open source application based on FreeBSD.

[Screenshot: FreeNAS]

To build a NAS server with FreeNAS you don't need a powerful computer; a PC with 128 MB of memory is enough to run FreeNAS, though better hardware will of course perform better. FreeNAS supports the CIFS (Samba), FTP, NFS, RSYNC, and AFP protocols, and also supports RAID (0, 1, and 5).

Just to share my FreeNAS installation experience, here are the installation steps and basic configuration for building your own NAS server.

Server computer: the machine used as the NAS server.
Specification: IBM eServer Pentium III 1266 MHz, 256 MB + 128 MB memory, 40 GB Samsung Spinpoint HD.

Installation:
FreeNAS can be installed in two ways:

Using the LiveCD, with the configuration stored on a floppy or a USB disk.
Installing FreeNAS on a disk (HD/USB disk/memory drive). This tutorial uses this option.

Installation steps:

First, download the FreeNAS ISO file from http://www.freenas.org/ and choose a package marked as stable. At the time of writing I used LiveCD version 0.686.3 (released 13-03-2008). When the download finishes, burn the ISO file with CD burning software such as Nero.
Insert the FreeNAS CD into the CD-ROM drive and boot the computer from the CD.
Wait until the FreeNAS Console Setup menu appears (after the beep). The console menu is usually hidden behind a sort of wallpaper; just press ENTER to dismiss it.
“Console setup”
“*********************”
1) Assign Interface
2) Set LAN IP address
3) Reset WebGUI password
4) Reset to factory defaults
5) Ping host
6) Shell
7) Reboot system
8) PowerOff system
9) Install to a hard drive/memory drive/USB Pen, etc.
Since we are installing to the HD, choose option 9.
"Install & Upgrade"
"*********************"
1) Install 'embedded' OS on HD/Flash/USB
2) Install 'embedded' OS on HD/Flash/USB + data partition
3) Install 'full' OS on HDD + data partition
4) ...
5) ...
6) ...

Options 4, 5, and 6 are upgrade options. Choose number 3; this option installs the FreeNAS OS onto the hard disk. The FreeNAS 'full' installer for HDD will:

- create MBR partition 1, using UFS, customizable size, for the OS
- create MBR partition 2, using UFS, for DATA
- ...

WARNING: There will be some limitations ....
Warning! Option 3 erases all partitions and data on the HD, so make sure there is no important data on the disk you are going to use. Click OK to continue.
Set the size of partition 1 for the OS; the minimum is 64 MB. You can accept the default (64 MB) or allow a little extra room (for example 128 MB) for additional modules later.
The installer script will carry out the installation to the HD; a moment later (1-2 minutes) the installation will be finished.

Note:
The Install Full OS on HD option automatically splits the HD into two partitions: partition 1 for the OS and partition 2 for data. The second partition is formatted automatically, so it does not need to be formatted again before use. To use this data partition:
1. Add disk ad0 on the "Disks" Management page.
2. Add a mount point on the "Disks" Mount Point page, using the following parameters: disk ad0, partition 2, filesystem UFS. (This part is explained further in the WebGUI configuration section.)

Eject the FreeNAS CD from the CD-ROM drive, then reboot the server.

Initial configuration
After booting, you will return to the FreeNAS Console Setup menu.

“Console setup”
“*********************”
1) Assign Interface
2) Set LAN IP address
3) Reset WebGUI password
4) Reset to factory defaults
5) Ping host
6) Shell
7) Reboot system
8) PowerOff system
9) Install to a hard drive/memory drive/USB Pen, etc.

By default, FreeNAS uses the first NIC/LAN card it detects, with a default IP address of 192.168.1.250. If the server has only one NIC, option 1 is not needed, since the LAN card is normally detected automatically.
Next, set the IP address of this NAS server: choose 2, and a prompt appears asking whether to use DHCP or not. It is better not to use DHCP. Enter an IP address used on your network (for example 192.168.8.128/24, with gateway and DNS 192.168.x.254). Since IPv6 is not in use yet, just pick the automatic option.
After the LAN settings, choose number 5 and ping another computer on the network to make sure the connection is OK.

Basic configuration (web based)

Once the network connection is OK and you can ping another computer from the FreeNAS PC, use another computer to access the FreeNAS WebGUI. Type the FreeNAS server's IP address (for example: http://192.168.8.128) and a login dialog box will appear:

[Screenshot: login dialog]

In the login/password box enter:
1. Default username: admin
2. Default password: freenas
3. Click OK

WebGUI layout

[Screenshot: WebGUI]

If the login succeeds (it should, since we are still using the default password), you will see the WebGUI as above. The configuration menus are on the left.

Under System - General Setup you can set the name of this NAS server. Several modules have controls shaped like [+], [x], and [e]:

[+] = add an element
[x] = delete an element
[e] = edit an element

For a basic configuration, the parts that need to be set up to bring this NAS server online are Disks and Services.

Disk configuration
A disk must be added before it can be formatted and mounted in FreeNAS. The workflow for configuring sharing in FreeNAS is:

1. Add the disks (Add Disks)
2. Format the disks (if needed) using the UFS file system
3. Mount the disks (Add Mount Point)
4. Enable services (CIFS, FTP, etc.)

To add a disk, click Management (in the Disks menu). On the Disks/Management page, click [+].

[Screenshot: Disks/Management page]

Next, add the disk:

[Screenshot: Disk Add page]

On the Disk Add page, use the following parameters:

Disk: ad0
Preformatted FS: Unformatted
Leave the other parameters at their defaults.

For a NAS server with more than one HD, repeat the steps above for each additional disk.

Click Apply Changes when you have finished adding disks.

Formatting the disk
For a server with a single HD, this step is not needed, because the data partition was formatted automatically during installation. But if you use more than one HD, the second and subsequent disks must be formatted.

[Screenshot: disk format page]

A confirmation prompt to format the disk will appear:

[Screenshot: format confirmation]

Mounting the disk
Once formatted, a disk must be mounted before it can be used.

[Screenshot: mount point page]

For a server with one HD, use the following parameters:

Disk: ad0
Partition: 2 (because the server is installed on a single HD, partition 1 is used for the OS)
File system: UFS
Share name: for example, DATA
Description: Dataku (example)

Click Add, then Apply Changes.

Enabling services on FreeNAS
CIFS/Samba is used in Windows environments, so that the shared/mounted folder can be seen by Windows computers ('Network Neighborhood').

[Screenshot: CIFS/Samba settings]

Fill in the parameters as follows:
Make sure the Enable box is checked.
Authentication: Anonymous (everyone on the network can access the FreeNAS server)
NetBIOS name: for example DATACENTER (the computer name that will appear in the Windows network neighbourhood)
Workgroup: the workgroup name (match your existing workgroup).

Leave the rest at their defaults. Click Save and Restart.

Shares
Click [+] to add a share.

[Screenshot: share edit page]

Parameters:
Name: the share name, for example DATA
Comment: a description of this share, for example Dataku
Path: the folder path (use the [...] button to browse the existing folders)

Click Save.

– OK –
That completes the basic configuration of FreeNAS. Of course, security has not yet been hardened, and several services are still disabled, but I'm sure you can explore those on your own.

Testing and using the FreeNAS share

From another computer on the FreeNAS network (as an example I used Windows XP Pro), type the NAS server's IP address into Windows Explorer (for example: \\10.19.2.5) and click OK.

The shared folder on the NAS server should appear and be accessible.

This share is available on the network with read/write (full) access. The share can also be mapped as a local drive to make it easier to browse. Try copying a file or folder to the share.

A complete FreeNAS tutorial is available on the FreeNAS wiki at the official FreeNAS site (http://www.freenas.org).

Morals of Husband and Wife in a Muslim Family

By: Ali Akbar bin Agil

PROF. Dr. Sayyid Muhammad Al-Maliki, a great scholar from the city of Makkah, in his book Adabul Islam Fi Nidzaamil Usrah, presents the manners, etiquette, and morals of husband and wife in family life. The book explains the importance of good conduct on the part of both husband and wife; both share the duty and the obligation of taking the household conduct of the Prophet as their complete guide.

For a husband, the first thing he must know in treating his wife is to put compassion, love, and gentleness first.

In the Qur'an, Allah says:

وَعَاشِرُوهُنَّ بِالْمَعْرُوفِ فَإِن كَرِهْتُمُوهُنَّ فَعَسَى أَن تَكْرَهُواْ شَيْئاً وَيَجْعَلَ اللّهُ فِيهِ خَيْراً كَثِيراً

"And live with them (your wives) honourably. If you dislike them, (be patient,) for it may be that you dislike a thing through which Allah brings about much good." (Qs. An-Nisa`: 19)

Rasulullah Shallallahu 'alaihi Wassalam said, as narrated by Ibnu Majah:

"The best of you are those who are best to their families, and I am the best of you in my treatment of my family."

Second, as head of the family, a husband is encouraged to treat his wife and children with affection and to keep himself from harshness.

Sometimes a husband is a respected figure in society, well able to speak gently and act politely in public, yet fails to treat his own family with the manner he shows when speaking to the community.

Third, a husband badly needs a supply of patience so that he stays resilient in the face of unpleasant situations. A strong husband is one who, out of love and affection for his wife, is not easily provoked to anger when he sees things that are not quite right.

How patient the Prophet was as a husband in caring for his wives.
So patient was he that one of his companions said, "I have never seen anyone more affectionate to his family than Rasulullah Shallallahu 'alaihi Wassalam." (HR. Muslim).

Another example of an affectionate husband can be seen in the story of Sayidina Umar bin Khaththab Ra. Famous for his firmness and severity toward wrongdoing, he once spoke to a Bedouin who had come to complain about his wife's nagging. At that very moment, Umar himself had just been scolded rather loudly by his own wife.

Umar advised the Bedouin, "My Muslim brother, I try to restrain myself in the face of my wife's behavior, because she has rights over me. I try to hold back even though I could hurt her (be harsh) and scold her. But I realize that no one honors them (women) but an honorable man, and no one demeans them but a man inclined to hurt. I would rather be honorable even if I lose (to my wife), and I do not want to be one who hurts even if I win."

Umar continued, "My Arab brother, I restrain myself because she (my wife) has rights over me. It is she who cooks my food, bakes my bread, nurses my children, and washes my clothes. However great my patience with her behavior, that is how much reward I receive."

Fourth, a husband should be able to joke with his wife. Laughter and playfulness in married life should be routine. Imagine what happens if a couple passes their days without humor: slowly the household becomes like a graveyard, silent, still, and empty.

A husband who wants to fulfil his wife's rights will invite jokes and banter that melt the atmosphere with smiles and laughter, and will play games and compete with his wife as Rasulullah did with his wife Aisyah Ra.

In every person there is something childlike, especially in a woman. A wife needs tenderness from her husband, so let nothing stand in the way of a husband's playful affection for his wife.

Maurice J. Elias Ph.D. and colleagues, in the book Emotionally Intelligent Parenting: How to Raise a Self-Disciplined, Responsible, Socially Skilled Child, touch on the role of humor in the body's chemistry and psychology: "A little daily humor is like a powerful vitamin for building and maintaining your ability to respond positively to the tasks of parenting and life's other challenges."

Inserting humor into one's relationship with spouse and children, according to Maurice, is meant to keep us in an optimistic frame of mind. "Try to do things that bring you into a humorous mood every day, even if only briefly. If not every day, then as often as you can," he advises in the Indonesian translation of the book, published as Cara-cara Efektif Mengasuh Anak dengan EQ.

A Wife's Morals

As for the wife, her duty is not to burden her husband with things he cannot do, and not to demand more than is needed. This attitude is a real help to the husband in financial matters.

How noble is a woman with a spirit of qana`ah (contentment), careful in spending money to provide for her husband and children. In earlier times, the women of the salaf would counsel their husbands or fathers: "Beware of earning wealth that is not halal. We can endure hunger, but we could never endure the torment of hellfire." This is the first virtue on the wife's side.

Second, a righteous wife is devoted to her husband, putting his rights before her own and those of her relatives. Obedience to the husband includes treating one's mother-in-law well.

Third, the wife, as the children's first teacher, should educate them well, let them hear good words, and pray for them with good prayers. All of this is part of a wife's devotion to her husband.

Fourth, a well-mannered wife does not air household affairs, or dredge up matters that once hurt her, in public forums. It often happens that a woman tells others about the bad things that have befallen her, as if telling the story of the problems entangling her would solve them. What happens is the opposite: the family's faults and shame become public consumption, the good name of husband and family suffers, and a way out is never found.

The fifth form of good manners: she does not leave the house without first obtaining her husband's permission. About this the Prophet warned, "A woman (wife) should not leave her husband's house except with his permission. If she does (leaves without permission), Allah and His angels curse her until she repents or returns home." (HR. Abu Dawud, Baihaqi, and Ibnu `Asakir from Abdullah bin Umar).

Likewise, in non-obligatory worship such as voluntary fasting, a wife should not proceed until her husband has given permission.

How beautiful is the life of a couple who take the household of Prophet Muhammad Shallallahu 'alaihi Wassalam as their point of reference in keeping the relationship harmonious. No man is perfect as a husband, and no woman as a wife; strengths and weaknesses are certain. A husband and wife who are conscious of their rights and duties will raise a righteous, God-fearing next generation that makes the pleasure of Allah its highest goal.

Building a happy household requires skill, intelligence, and wisdom from those who run it. Each partner must be clever and wise in managing the household, in managing the relationship with their children, in dividing time between work and time with their spouse, in managing the finances, and even in managing their love.

The author leads a majelis ta`lim and Ratib Al-Haddad in Malang.

Bridge Filter - Blocking DHCP Traffic

Background

I've been working on implementing DHCP relay throughout our network. However, at times we have had problems with customers plugging their routers in backwards: they start handing out DHCP leases to other customers, which is definitely annoying. I'm not taking credit for this idea, just putting together what I found. I'm aware of setting the authoritative flag on the DHCP server.

This will put a stop to it:

Rule to block DHCP traffic originating from a 192.168.0.0/16 device; this blocks the default router DHCP traffic from Linksys or D-Link products.

/interface bridge filter
add action=log chain=input comment="Block DHCP servers on 192.168.0.0/16" \
disabled=no dst-address=255.255.255.255/32 ip-protocol=udp log-prefix=\
"ALERT ROGUE DHCP (BLOCKED)" mac-protocol=ip src-address=192.168.0.0/16 \
src-port=67-68
add action=drop chain=input comment="Block DHCP servers on 192.168.0.0/16" \
disabled=no dst-address=255.255.255.255/32 ip-protocol=udp mac-protocol=\
ip src-address=192.168.0.0/16 src-port=67-68


You should also make sure that IP firewall connection tracking is turned on. Add this rule to your core routers and access points wherever customers have the potential of plugging devices in backwards.

/interface bridge settings
set use-ip-firewall=yes use-ip-firewall-for-pppoe=no use-ip-firewall-for-vlan=yes

Phone tracking

Tracking someone's location by phone number... I've tried it myself...

1. Open the site http://www.themobiletracker.com/english/index.html
2. Choose the country of the target number
3. Enter the phone number of the person you want to find
4. Wait a moment; a satellite will locate the phone's holder
5. The site will display a map of where the phone's holder is (real time, like Google Earth!)

For those who have already tried it, here's my review: it's pretty cool, since you can find out where the people you know are.

Motherboard


Motherboard alias mainboard alias system board, ketiganya mengacu pada satu barang yang sama, yakni sebuah papan sirkuit dan panel-panel elektronik yang menggerakan system PC secara keseluruhan. Secara prinsip, sebuah motherboard terdiri atas beberapa bagian yakni system CPU (prosesor), sirkuit clock/timing, Ram, Cache, ROM BIOS, I/O port seperti port serial, port pararel, slot ekspansi, prot IDE.

Above all, there are at least seven things to pay attention to on a motherboard. The seven components are:

Chipset
CPU type
Memory slots and type
Cache memory
System BIOS
Expansion slots
I/O ports

It is from these components that problems in a PC system can usually be traced or detected. Failures outside these seven components are rare. Alternatively, if all seven appear to be fine, the problem likely lies in the architecture of the motherboard itself: its circuits, or the components used to build it.
Chipset: the commander of data and processing

It is called a chipset because it is generally a pair of chips that controls the processor and all the hardware features on the motherboard. This pair of chips, one called the North Bridge chip and the other the South Bridge chip, can be regarded as the supreme commander of the system called the motherboard. Today there are many motherboards with different chipsets. The chipset used on a motherboard determines, among other things:

The processor types that can be used
The memory types the PC system can support and their maximum capacity
The I/O facilities that can be provided
The display adapter types that can be used
The data width the motherboard can support
The availability of extra features (for example onboard LAN, sound card, or modem)


CPU Type

There are three CPU families widely available on the market: CPUs from Intel Corporation, AMD CPUs from Advanced Micro Devices, and Cyrix or VIA C3 CPUs from VIA Technologies Corporation. VIA's processors generally follow the platform technology released by Intel; that is, each processor series VIA releases is generally compatible with the corresponding Intel series. AMD, on the other hand, uses a platform different from Intel's, even though its process technology also follows Intel's lead. Because of this platform difference, AMD processors use sockets or slots different from Intel's: where Intel says Slot 1, AMD says Slot A. For socketed processors, AMD has lately been relatively consistent in its socket choice, sticking with the 462-pin Socket A, which is compatible across all of its speed grades. Compare this with Intel, which kept changing: from a 370-pin socket to 423 pins, then again to 478. As a result, upgrading to a new generation of Intel processor usually also means replacing the motherboard itself. Below is a short history of Intel processors and their clones.

Intel's debut began with the MCS-4 series, the forerunner of the i4004 processor. This 4-bit processor was planned to be the brain of a calculator; in the same year (1971), Intel produced a revision, the i4040. Originally commissioned by a Japanese company for a calculator, the processor turned out to be far more capable than expected, so Intel bought back the usage rights from the Japanese company for further research and development. This was the starting point of the move toward computer processors.
Next came the first 8-bit processor, the i8008 (1972), which was not very popular because it required multiple supply voltages. Then came the i8080: it moved to a triple-voltage supply, used NMOS technology (no longer PMOS), and introduced the first clock-generator scheme (via an extra chip), packaged in a 40-pin DIP. Other processors followed: the MC6800 from Motorola (1974) and the Z80 from Zilog (1976), two heavyweight rivals, plus the 6500-series processors from MOS Technology, Rockwell, Hyundai, WDC, NCR, and so on. The Z80 was fully compatible with the i8080, but only at the machine-language level; its assembly language differed (no software-level compatibility). The i8080 had 8-bit internal registers, an 8-bit external bus, and 16-bit memory addressing (64 KB of memory in total).
In 1977 came the 8085, with the clock generator on the processor itself, pioneering the use of a single +5V supply (used through the 486DX2; from the DX4 onward, +3.3V and lower).
The i8086, a processor with 16-bit registers, a 16-bit external data bus, and 20-bit memory addressing, was released in 1978 using HMOS technology. Support components for a 16-bit bus were very scarce, so it ended up very expensive.
To answer market demand, the i8088 appeared: a 16-bit internal bus with an 8-bit external bus, so the i8088 could reuse the cheap 8-bit peripheral components of the i8080 generation. IBM chose this chip for the IBM PC because it was cheaper than the i8086. Had IBM's CEO at the time not declared the PC a mere side project, IBM would surely dominate the PC market completely today. The IBM PC was first released in August 1981 in three versions: IBM PC, IBM PC-Jr, and IBM PC-XT (eXtended Technology). The i8088 was so popular that NEC launched chips built to its pin specification, named the V20 and V30. The NEC V20 and V30 were compatible with Intel down to the assembly-language (software) level.

The 8088 and 8086 were compatible with programs written for the 8080 at the source level (8080 assembly could be mechanically translated for them), although some programs written for the 8086 would not work on the 8088 (because of the bus-width difference).

Then came the 80186 and i80188. From the i80186 on, processors began to be packaged in PLCC, LCC, and 68-pin PGA packages; the i80186 was physically square, with 17 pins per side (PLCC/LCC) or two rows of pins per side (PGA), and starting with the i80186 the DMA and interrupt controllers were integrated into the processor. With the 286, IBM computers adopted the name IBM PC-AT (Advanced Technology), and the term Personal System (PS/1) came into use. The 16-bit ISA slot, developed from the 8-bit ISA slot, also appeared, and the cloners started showing up in force: AMD, Harris, and MOS, all fully compatible with Intel. The 286 introduced Protected Virtual Address Mode, which made time-sharing multitasking possible (via hardware resetting).

In 1986 IBM built the first 32-bit RISC-architecture processor for the PC class. But because of software scarcity, the IBM RT PC flopped; in the enterprise class, RISC developed much faster, with many mutually incompatible vendors.

Then, to regain the momentum lost since the i8086, Intel built the i80286: a processor with 16-bit registers, a 16-bit external bus, and a limited protected mode known as STANDARD mode, with 24-bit memory addressing capable of accessing at most 16 MB of memory. The 80286 was of course fully compatible with the earlier 808x chips, with several new instructions added. Unfortunately, the chip had several hardware design bugs, so it failed to gather a following.
In 1985, Intel launched an entirely new processor design: the i80386. A 32-bit processor, meaning 32-bit registers and a 32-bit external data bus, it preserved compatibility with previous processor generations while introducing 32-bit PROTECTED mode with 32-bit memory addressing, able to access a maximum of 4 GB, plus several new instructions. This chip began to be packaged as a PGA (Pin Grid Array).
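The addressing figures quoted above (1 MB, 16 MB, 4 GB) follow directly from the width of the address bus: an n-bit bus can select 2^n distinct byte addresses. A small sketch in Python (the function name is just for illustration):

```python
# Addressable memory for an n-bit address bus is 2**n bytes.
def addressable_bytes(address_bits: int) -> int:
    return 2 ** address_bits

# The bus widths mentioned in the text:
for name, bits in [("i8086, 20-bit", 20), ("i80286, 24-bit", 24), ("i80386, 32-bit", 32)]:
    print(f"{name}: {addressable_bytes(bits) // 2**20} MB")
```

The 32-bit result, 4096 MB, is the 4 GB limit that 32-bit protected mode carried through the entire x86 line until 64-bit extensions.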

Up to this point Intel processors had no internal FPU. For FPU support, Intel offered the 80x87 series. From the 386 onward the processor cloners appeared: AMD, Cyrix, NexGen, TI, IIT, IBM (Blue Lightning), and so on. The variants:

i80386 DX (fully 32-bit)
i80386 SX (cheap, 16-bit external bus)
i80486 DX (internal 487 FPU)
i80486 SX (487 disabled)
Cx486 DLC (uses a 386DX motherboard, like the others below)
Cx486 SLC (uses a 386SX motherboard)
i80486DX2
i80486DX2 ODP
Cx486DLC2 (386 motherboard architecture)
Cx486SLC2 (386 motherboard architecture)
i80486DX4
i80486DX4 ODP
i80486SX2
Pentium
Pentium ODP
Around 1989 Intel launched the i80486DX, a very popular series. Its improvements over the 80386 were speed, internal FPU support, and the clock-multiplier scheme (the i486DX2 and iDX4 series), with no new instructions. In response to public demand for a cheap processor, Intel released the i80486SX, which was simply an i80486DX with its FPU circuitry disabled. As expected, the i80486DX line remained fully compatible with the instruction sets of the earlier chips.
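The clock-multiplier scheme mentioned above simply multiplies the motherboard's bus clock to obtain the core clock; the DX4, despite its name, actually used a 3x multiplier. A quick sketch (function name is illustrative):

```python
# Core clock = bus clock * multiplier: the i486 clock-multiplier scheme.
def core_clock_mhz(bus_mhz: float, multiplier: int) -> float:
    return bus_mhz * multiplier

print(core_clock_mhz(33, 2))  # i486DX2-66: a 33 MHz bus doubled at the core
print(core_clock_mhz(33, 3))  # i486DX4-100: tripled, despite the "4" in the name
```

The scheme let the core run faster than the motherboard could, which is why the same 33 MHz boards could host several processor speed grades.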
AMD and Cyrix then bought the i80386 and i80486DX processor designs to build Intel-compatible processors, and they proved very successful. In my view this is what 'cloning' really means, just like the NEC V20 and V30 story: AMD and Cyrix did not do vertical design work (building on one of their own earlier chips), but worked from an existing design to build a chip of the same class.
In 1993, Intel launched the Pentium. Its improvements over the i80486: a larger PGA package, higher clock speeds, and superscalar pipelining, with NO new instructions. There was nothing special about the chip, except for the fact that the VLB standard built for the i80486 no longer fit (a poor fit rather than outright incompatibility), so chipset makers had to redesign their parts to support PCI. Intel used the Pentium name to hold back its competitors: from the Pentium onward, the cloners started dropping out, leaving only AMD and Cyrix. Intel chose the name 'Pentium' because it had lost a patent case in court; a number cannot be protected, so Intel released the Pentium under a trademark. AMD and Cyrix, not wanting to be left behind, introduced the Pentium Rating (PR) standard. Earlier, in 1992, Intel had briefly collaborated with Sun, but the effort failed and Intel was sued by Sun for allegedly copying Sun's designs. From the Pentium onward, Intel applied the kind of pipelining previously found only in RISC processors (RISC such as SunSPARC). The 32-bit VESA Local Bus was an extension of the 16-bit ISA architecture running at a fixed clock from its own clock generator (usually above 33 MHz), whereas PCI was a new architecture whose clock is derived from the processor's bus clock (usually half of it); so a PCI VGA card's speed is not the same across different processor bus frequencies: the faster the bus, the faster the PCI bus.
In 1995 came the Pentium Pro. The innovation of integrating cache memory into the processor package required the new Socket 8. The processor's pins were divided into two groups: one for the cache memory and one for the processor itself, which was little more than a Pentium with a rearranged pinout. The design made the processor far more efficient at handling 32-bit instructions; but if a 16-bit instruction appeared in a 32-bit instruction stream, the processor flushed its cache, so execution slowed down. Only one instruction was added: CMOV (Conditional MOVe).
In 1996 came the Pentium MMX processor. It was really no more than a Pentium with an extra unit and an extra instruction set, namely MMX. To this day Intel has not given a clear definition of the term MMX; 'Multi Media eXtension' is the expansion used by AMD. The chip had a design limitation: because the MMX module was simply bolted onto the Pentium design without a redesign, Intel had to make the MMX unit and the FPU share, meaning that while the FPU is active, MMX is inactive, and vice versa. So a Pentium MMX running in MMX mode is not compatible with the plain Pentium.

What about the AMD K5? The AMD K5-PR75 was in fact a 'clone' of the i80486DX with a 133 MHz internal clock and a 33 MHz bus clock. The Pentium specifications AMD obtained when designing later K5 versions, and Cyrix when designing the 6x86, were limited to the Pentium's pin specifications; they were given no access to the original design. Even IBM could not make Intel budge (Cyrix had a binding contract with IBM until 2005). As for the AMD K6 design: did you know the K6 was originally a NexGen design? When Intel announced it was building an MMX unit, AMD sought out an MMX design and added it to the K6. Unfortunately, the MMX specification AMD obtained was apparently not the one Intel used, as the K6 turned out to have many MMX instruction incompatibilities with the Pentium MMX.

In 1997, Intel launched the Pentium II: a Pentium Pro with MMX technology and two innovations. First, the cache memory is no longer packaged with the processor core as in the Pentium Pro; it sits outside the core on the cartridge module, running at half the processor clock. This innovation removed the Pentium Pro's weakness (the cache-flushing problem). The second innovation is the SEC (Single Edge Cartridge) package; notably, a Pentium Pro can be mounted in the SEC slot with a special adapter. Note: because the Pentium Pro's L2 cache is on-package, its cache runs at processor speed, while the PII's cache sits 'outside' (on the processor module) and therefore runs at half the processor speed. Several reasons are given for the PII's use of Slot 1:

First, it widens the data path (many pins, which was also the reason for Socket 8), so processing on the PPro and PII can run in parallel; Slot 1's real strength is therefore in multithreading / multiple processors. (Unfortunately, few operating systems supported this; even ZDBench's dual-processor PII benchmarks were mostly run under Win95 rather than NT.) Second, it allows Slot 1 upgrades without eating up motherboard space, since a ZIF socket with that many pins could take an area approaching the board's own form factor; this space-saving idea goes back to the 8088 days. Why did the SIMM specification appear with the 286? Partly for space efficiency and a simpler form.

Third, it allows a more efficient cache module running at a high speed matched to the processor, again without taking much space, unlike AMD and Cyrix, which were 'forced' to double their L1 caches to keep up with the PII's speed (because their L2 was slow). The conclusion: the AMD K6 and Cyrix 6x86 are not fast in the processor itself, but fast on cache hits! With the Socket 7 spec, L2 cache speed is limited to the data-bus speed at best, and slower still when the data bus is busy, whereas the PII was planned to run at 100 MHz (no longer 66 MHz). This point is one of the reasons Intel moved from the 430 chipset to the 440, which also meant replacing the motherboard.