HBase directory structure and file contents in Hadoop (to be completed)

1. The HBase root directory defaults to "/hbase"
  Defined by "hbase.rootdir" in hbase-default.xml; the recommended location in Hadoop is hdfs://namenode:9000/hbase
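  For reference, a minimal sketch of overriding this property in hbase-site.xml (which takes precedence over the defaults shipped in hbase-default.xml); the NameNode host and port below are placeholders and must match the cluster:

    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://namenode:9000/hbase</value>
    </property>
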
2. Layout under the HBase root directory, including user table directories such as the Standard1 column family under the KeySpace1 table. In the tree below, "+" marks a directory that is not expanded, "-" marks an expanded directory, and files carry no +/- marker.
  - /hbase -- corresponds to rootDir in HRegionServer and HMaster
 
      - /-ROOT-  -- the -ROOT- table directory; created at HMaster startup if it does not exist.
          - /70236052
              + /.tmp
              - /info
                  5888818227459135066 -- an HFile (see the fragment in 4.2)
              + /.oldlogs
             
              .regioninfo  
      - /.META.
          - /1028785192
              + /.oldlogs
              + /.tmp
              + /historian
              - /info
                  9039007746585873905 -- an HFile
      + /.logs
      + /.oldlogs
      hbase.version -- (HBase version information; here the value is 7)
     
      - /KeySpace1 -- corresponds to the HRegion tableDir.
          + /050f2de16c7a4c82af31c54dd82f8eb0 -- (HRegion.regiondir; the name is the HRegionInfo.encodedName)
          + /1408bb740e20caa1188f041342a8036c
          ... (other regions)
         
          - /9fdee9a88b36e4243577506d2a73f2bc  -- (corresponds to regiondir in HRegion; the name is the encodedName, a hex version of the MD5 hash
                of "tablename,startkey,regionIdTimestamp". See HRegionInfo.java for details, and the sketch after this listing.)
              .regioninfo  -- (information about the current region; verified and created by HRegion.checkRegioninfoOnFilesystem(). Stores the serialized regionInfo.)
              .tmp -- (deleted when the HRegion is opened. When cached MemStore data is flushed to an HFile it is written under .tmp first; once the write
                  completes the file is renamed to, e.g., ./Standard1/4636404750291104726)
             
              - /splits -- (created when the region is split, in HRegion.splitRegion(byte[] splitRow); for the daughter HRegions this directory serves as the tableDir.
                    The initialize(final Progressable reporter) method deletes this directory.)
                  + /9fdee9a88b36e4243577506d2a73f2bd -- (splitA)
                  - /afdee9a88b36e4243577506d2a73f2bc -- (splitB) a temporary directory that is renamed to /hbase/KeySpace1/
                        afdee9a88b36e4243577506d2a73f2bc inside HRegion.splitRegion(byte[] splitRow).
                      - /Standard2 -- (column family)
                          4636404750291104726.9fdee9a88b36e4243577506d2a73f2bc -- (file corresponding to another StoreFile)
                          3277800383197646884.9fdee9a88b36e4243577506d2a73f2bc -- (a Reference file; the name is the StoreFile name + "." + the parent's encodedRegionName.
                              Generated by StoreFile.split(FileSystem fs, Path splitDir, StoreFile f, byte[] splitRow, Range range). The file stores the Reference
                              information; StoreFile checks whether a file matches the Reference format and, if so, opens it and loads the referenced half of the parent file.)
                     
              + /Standard1 -- (another column family; each column family corresponds to one Store)
              - /Standard2 -- (corresponds to homedir (Path) and storeNameStr (String) in Store)
                  3277800383197646884 -- (corresponds to the StoreFile path (Path); the file name is a random long value)
                  4636404750291104726
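
  As noted at the /9fdee9a88b36e4243577506d2a73f2bc entry above, the encoded region name is described as a hex MD5 digest of "tablename,startkey,regionIdTimestamp". The following is only an illustrative Java sketch of that scheme; HRegionInfo.java remains the authoritative implementation and may hash slightly different bytes:

    import java.security.MessageDigest;

    public class EncodedNameSketch {
      public static void main(String[] args) throws Exception {
        // Region name taken from the .regioninfo fragment in section 4.1.
        String regionName = "Keyspace1,user196287945,1285478901492";
        byte[] digest = MessageDigest.getInstance("MD5").digest(regionName.getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
          hex.append(String.format("%02x", b));   // two lowercase hex characters per byte
        }
        // Prints a 32-character hex string with the same shape as the region directory names above.
        System.out.println(hex);
      }
    }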


3. Directory structure under ZooKeeper
  - /hbase  -- the default parent znode, specified by the zookeeper.znode.parent property in the configuration; defaults to /hbase. See the ZooKeeperWrapper class for details.
 
      root-region-server -- address and port of the root region server, e.g. 192.168.1.24:60020. Managed by HMaster; the znode still exists after HMaster is stopped.
      master -- the master address; created when HMaster starts and deleted when it shuts down. Content: 192.168.1.93:60000
      shutdown -- content is "up" (purpose unclear)
     
      - /rs -- parent znode of the region servers.
          1286590273089 -- the region server's start timestamp; content: 192.168.1.28:60020
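
  A minimal Java sketch of reading these znodes with the plain ZooKeeper client (the quorum address zkhost:2181 is a placeholder; in this HBase generation the znode contents are plain host:port strings, while later releases use different znode names and serialized values):

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class HBaseZnodes {
      public static void main(String[] args) throws Exception {
        // Connect with a 30s session timeout and a no-op watcher.
        ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, event -> { });
        try {
          // List the children of the parent znode (zookeeper.znode.parent, default /hbase).
          List<String> children = zk.getChildren("/hbase", false);
          System.out.println("znodes under /hbase: " + children);
          // Read the root region server location, e.g. 192.168.1.24:60020.
          byte[] data = zk.getData("/hbase/root-region-server", false, null);
          System.out.println("root-region-server: " + new String(data, "UTF-8"));
        } finally {
          zk.close();
        }
      }
    }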
         
         
4. Selected file fragments
  4.1 .regioninfo
    user271456200+L?f?GKeyspace1,user196287945,1285478901492.9fdee9a88b36e4243577506d2a73f2bc.
user196287945  Keyspace1IS_ROOTfalseIS_METAfalse  Standard1 BLOOMFILTERNONEREPLICATION_SCOPE0 COMPRESSIONNONEVERSIONS1TTL86400000  BLOCKSIZE65536 IN_MEMORYtrue
BLOCKCACHEtrue?#P?

REGION => {NAME => 'Keyspace1,user196287945,1285478901492.9fdee9a88b36e4243577506d2a73f2bc.', STARTKEY => 'user196287945', ENDKEY => 'user271456200', ENCODED => 9fdee9a88b36e4243577506d2a73f2bc, TABLE => {{NAME => 'Keyspace1', FAMILIES => [{NAME => 'Standard1', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => '86400000', BLOCKSIZE => '65536', IN_MEMORY => 'true', BLOCKCACHE => 'true'}]}}

  4.2 /hbase/-ROOT-/70236052/info/5888818227459135066 -- an HFile
      DATABLK*#? .META.,,1inforegioninfo+?? .META.,,1.META.IS_ROOTfalseIS_METAtrue  historian BLOOMFILTERNONEREPLICATION_SCOPE0 COMPRESSIONNONEVERSIONS
2147483647TTL604800 BLOCKSIZE8192  IN_MEMORYfalse
BLOCKCACHEfalseinfo BLOOMFILTERNONEREPLICATION_SCOPE0 COMPRESSIONNONEVERSIONS10TTL
2147483647  BLOCKSIZE8192  IN_MEMORYtrue
BLOCKCACHEtrueP?l  .META.,,1infoserver+???datanode-2:60020( .META.,,1infoserverstartcode+???+??MAJOR_COMPACTION_KEY?MAX_SEQ_ID_KEYhfile.AVG_KEY_LEN#hfile.AVG_VALUE_LEN?hfile.COMPARATOR2org.apache.hadoop.hbase.KeyValue$RootKeyComparator
hfile.LASTKEY( .META.,,1infoserverstartcode+???IDXBLK)+v# .META.,,1inforegioninfo+??TRABLK"$vZv

  4.3 /hbase/-ROOT-/70236052/.regioninfo -- regioninfo of the -ROOT- region
    -ROOT-,,0-ROOT-IS_ROOTtrueIS_METAtrueinfo BLOOMFILTERNONEREPLICATION_SCOPE0 COMPRESSIONNONEVERSIONS10TTL
2147483647  BLOCKSIZE8192  IN_MEMORYfalse
BLOCKCACHEfalse?V?J

REGION => {NAME => '-ROOT-,,0', STARTKEY => '', ENDKEY => '', ENCODED => 70236052, TABLE => {{NAME => '-ROOT-', IS_ROOT => 'true', IS_META => 'true', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => '2147483647', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'}]}}

  4.4 /hbase/.META./1028785192/info/903900774658587390 -- information about user tables stored in .META.
  DATABLK*T:Keyspace1,,1286763569270.97c8936c63fd10a333f91157460754db.inforegioninfo+??+??v:Keyspace1,,1286763569270.97c8936c63fd10a333f91157460754db. Keyspace1IS_ROOTfalseIS_METAfalse  Standard1 BLOOMFILTERROWREPLICATION_SCOPE0VERSIONS1 COMPRESSIONNONETTL86400000 BLOCKSIZE65536 IN_MEMORYtrue
BLOCKCACHEtrue  Standard2 BLOOMFILTERROWREPLICATION_SCOPE0VERSIONS1 COMPRESSIONNONETTL86400000 BLOCKSIZE256 IN_MEMORYtrue
BLOCKCACHEtruer?y?P:Keyspace1,,1286763569270.97c8936c63fd10a333f91157460754db.infoserver+???datanode-4:60020Y:Keyspace1,,1286763569270.97c8936c63fd10a333f91157460754db.infoserverstartcode+???+??MAJOR_COMPACTION_KEY?MAX_SEQ_ID_KEYhfile.AVG_KEY_LENThfile.AVG_VALUE_LEN?hfile.COMPARATOR2org.apache.hadoop.hbase.KeyValue$MetaKeyComparator
hfile.LASTKEYY:Keyspace1,,1286763569270.97c8936c63fd10a333f91157460754db.infoserverstartcode+???IDXBLK)+5T:Keyspace1,,1286763569270.97c8936c63fd10a333f91157460754db.inforegioninfo+??TRABLK"$5J5
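
  The HFile fragments in 4.2 and 4.4 are shown as raw bytes. For a readable dump, HBase ships an HFile inspection tool that can be pointed at such a file; the exact flags vary between releases (running the class with no arguments prints its usage), for example:

    bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f /hbase/-ROOT-/70236052/info/5888818227459135066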

5. hbase shell
 
  5.1 hbase(main):003:0> scan '-ROOT-'
ROW                                COLUMN+CELL                                                                                     
.META.,,1                         column=info:regioninfo, timestamp=1286762316984, value=REGION => {NAME => '.META.,,1', STARTKEY =
                                   > '', ENDKEY => '', ENCODED => 1028785192, TABLE => {{NAME => '.META.', IS_META => 'true', FAMILI
                                   ES => [{NAME => 'historian', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '214748
                                   3647', COMPRESSION => 'NONE', TTL => '604800', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCA
                                   CHE => 'false'}, {NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '1
                                   0', COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCA
                                   CHE => 'true'}]}}                                                                               
.META.,,1                         column=info:server, timestamp=1287129838697, value=datanode-3:60020                             
.META.,,1                         column=info:serverstartcode, timestamp=1287129838697, value=1287129832854 

5.2 scan '.META.',{LIMIT => 2}
ROW                                COLUMN+CELL                                                                                     
Keyspace1,,1286872546310.275bb305 column=info:regioninfo, timestamp=1286872546523, value=REGION => {NAME => 'Keyspace1,,12868725463
a270083cd5a22cb007c0aea9.         10.275bb305a270083cd5a22cb007c0aea9.', STARTKEY => '', ENDKEY => 'user1010706264', ENCODED => 275
                                   bb305a270083cd5a22cb007c0aea9, TABLE => {{NAME => 'Keyspace1', MAX_FILESIZE => '16384', MEMSTORE_
                                   FLUSHSIZE => '2048', FAMILIES => [{NAME => 'Standard1', BLOOMFILTER => 'ROW', REPLICATION_SCOPE =
                                   > '0', VERSIONS => '1', COMPRESSION => 'NONE', TTL => '86400000', BLOCKSIZE => '4096', IN_MEMORY
                                   => 'true', BLOCKCACHE => 'true'}, {NAME => 'Standard2', BLOOMFILTER => 'ROW', REPLICATION_SCOPE =
                                   > '0', VERSIONS => '1', COMPRESSION => 'NONE', TTL => '86400000', BLOCKSIZE => '4096', IN_MEMORY
                                   => 'true', BLOCKCACHE => 'true'}]}}                                                             
Keyspace1,,1286872546310.275bb305 column=info:server, timestamp=1287129843536, value=datanode-3:60020                             
a270083cd5a22cb007c0aea9.                                                                                                         
Keyspace1,,1286872546310.275bb305 column=info:serverstartcode, timestamp=1287129843536, value=1287129832854                       
a270083cd5a22cb007c0aea9.                                                                                                         
Keyspace1,user1010706264,12868725 column=info:regioninfo, timestamp=1286872574444, value=REGION => {NAME => 'Keyspace1,user10107062
74086.323fe8155461c7e24fa9294b08f 64,1286872574086.323fe8155461c7e24fa9294b08fbd6bb.', STARTKEY => 'user1010706264', ENDKEY => 'use
bd6bb.                            r1026581438', ENCODED => 323fe8155461c7e24fa9294b08fbd6bb, TABLE => {{NAME => 'Keyspace1', MAX_FI
                                   LESIZE => '16384', MEMSTORE_FLUSHSIZE => '2048', FAMILIES => [{NAME => 'Standard1', BLOOMFILTER =
                                   > 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', TTL => '86400000', BLO
                                   CKSIZE => '4096', IN_MEMORY => 'true', BLOCKCACHE => 'true'}, {NAME => 'Standard2', BLOOMFILTER =
                                   > 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', TTL => '86400000', BLO
                                   CKSIZE => '4096', IN_MEMORY => 'true', BLOCKCACHE => 'true'}]}}                                 
Keyspace1,user1010706264,12868725 column=info:server, timestamp=1287129842825, value=datanode-1:60020                             
74086.323fe8155461c7e24fa9294b08f                                                                                                 
bd6bb.                                                                                                                            
Keyspace1,user1010706264,12868725 column=info:serverstartcode, timestamp=1287129842825, value=1287129832857                       
74086.323fe8155461c7e24fa9294b08f                                                                                                 
bd6bb.                             
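
  The same catalog information can be read programmatically. A minimal sketch against the pre-1.0 client API (HTable, HBaseConfiguration.create(); exact class and method names differ across releases), printing the hosting server for each user region, as the info:server column above shows:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaScan {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
        HTable meta = new HTable(conf, ".META.");            // the catalog table scanned in 5.2
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("info"));               // info:regioninfo, info:server, info:serverstartcode
        ResultScanner scanner = meta.getScanner(scan);
        try {
          for (Result row : scanner) {
            byte[] server = row.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
            System.out.println(Bytes.toString(row.getRow()) + " -> "
                + (server == null ? "(unassigned)" : Bytes.toString(server)));
          }
        } finally {
          scanner.close();
          meta.close();
        }
      }
    }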


 
           